The regulatory framework governing safe navigation has historically incorporated objective standards involving a human element. The IMO's International Regulations for Preventing Collisions at Sea, 1972 (COLREGs) are no exception – they provide rules on vessel priority but also allow for deviations where circumstances require (e.g. Rule 2, which permits departures required by the “ordinary practice of seamen”).
As such, there has been significant discussion as to whether autonomous vessels can comply with the COLREGs, particularly Rule 2 (responsibility), Rule 5 (look-out), Rule 8 (action to avoid collision) and Rule 18 (responsibilities between vessels).
In this context, the incident between the VLCC Alexandra I and the containership Ever Smart, as judged in Nautical Challenge Limited v Evergreen Marine (UK) Limited [2018] EWCA Civ 2173 (Evergreen), demonstrates two issues: (i) the identity of the “give way” vessel may not be readily apparent to experienced deck officers; and (ii) “good seafarer behaviour” is not a fixed standard, but a product of factual circumstance, interpreted through the COLREGs, case law, and the views of expert nautical assessors post-event. The challenge for developers is to address these issues pre-event in a predictable way, or otherwise rely on machine learning to ensure compliance with regulations.
The arguments in Evergreen demonstrate an immediate problem – how do developers account for difficult questions of law, such as a conflict between two provisions, or whether a rule applies in unusual circumstances?
In addressing the apparent conflict between the narrow channel rule (Rule 9) and the crossing rule (Rule 15), the first instance judge relied on statements of principle from two non-binding cases – The Canberra Star and Kulemesin v HKSAR – because of the “experience and knowledge” of the respective judges and because he agreed with the stated principles. If the solution to this conflict was not apparent to two experienced Masters, and required an examination of case law and a Court of Appeal judgment, could autonomous vessels have identified their obligations under the COLREGs?
Furthermore, in determining whether the crossing rule applied, the first instance judge considered whether the VLCC Alexandra I (Nautical Challenge Limited) was on a “sufficiently defined course.” Alexandra I’s course made good varied between 081 and 127 degrees at less than two knots over the ground. She had travelled less than a mile in 20 minutes.
Although the Court determined that this was not a “sufficiently defined course”, it failed to clarify when a vessel (whether by speed, track line or heading) would be on a “sufficiently defined course”. Rather, the test requires an observer (who has spent “sufficient time” observing the vessel) to ascertain whether the vessel is on a defined course. This raises an obvious concern – what degree of variation in course and speed should an AI system treat as constituting a “sufficiently defined course”?
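The threshold judgement the Court left open can be illustrated with a minimal sketch. The window of track points, the maximum heading spread and the minimum speed below are hypothetical values chosen for illustration – they are not derived from the judgment or the COLREGs, and a real system would need far more nuance (including handling of the 000/360-degree wrap):

```python
def on_defined_course(track, max_spread_deg=10.0, min_speed_kn=2.0):
    """Hypothetical heuristic: treat a vessel as being on a
    'sufficiently defined course' if, over the observed track points,
    her course over ground stays within max_spread_deg and her speed
    over ground never falls below min_speed_kn.

    track: list of (course_over_ground_deg, speed_over_ground_kn)
    """
    courses = [c for c, _ in track]
    speeds = [s for _, s in track]
    spread = max(courses) - min(courses)  # crude: ignores the 360-degree wrap
    return spread <= max_spread_deg and min(speeds) >= min_speed_kn

# Alexandra I's observed track: course made good 081-127 degrees
# at less than two knots over the ground.
alexandra = [(81, 1.2), (96, 1.5), (110, 1.8), (127, 1.9)]
print(on_defined_course(alexandra))  # -> False (46-degree spread, < 2 kn)
```

Whatever values are chosen, the point stands: the legal test is observer-relative, while any implementation must commit to concrete numbers in advance.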
Nevertheless, it is possible that, had the vessels been autonomous, the collision could have been avoided, or the damage sustained from the collision reduced, as the AI could have prevented the “human errors” identified in Evergreen as contributing to the collision. Many maritime casualties are in fact caused by a series of minor human errors, such as the officer on watch being distracted, particularly in congested waters.
There is reason to question the Alexandra I’s early arrival at the approach channel and the Vessel Traffic Service Officer’s approval to proceed to the channel entrance buoys at the time. Additionally, Alexandra I’s AIS was not operating and she failed to maintain a good aural lookout.
By contrast, autonomous vessels would presumably operate enhanced AIS, GPS, radar, a suite of sensors and cameras (including thermal and infra-red), and predictive control algorithms to track and anticipate future vessel movements. Within congested areas, automated VTS (or eNAV) could inform vessels manoeuvring within such areas of potential collision risks in real time. The Maritime and Port Authority of Singapore has already trialled such systems, with provisional results showing that the AI could “quantify risk in more detail and more quickly than it could be detected by human operators.”
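The core of such risk quantification is the standard closest point of approach (CPA) and time to CPA (TCPA) calculation, which a machine can run continuously for every tracked target. The sketch below assumes constant-velocity point targets in a flat local frame – a simplification of what any production system would use:

```python
import math

def cpa_tcpa(own_pos, own_vel, tgt_pos, tgt_vel):
    """Closest point of approach (CPA) and time to CPA (TCPA) between
    two vessels modelled as constant-velocity points.
    Positions are (east, north) in nautical miles; velocities in knots.
    Returns (cpa_nm, tcpa_hours)."""
    rx, ry = tgt_pos[0] - own_pos[0], tgt_pos[1] - own_pos[1]  # relative position
    vx, vy = tgt_vel[0] - own_vel[0], tgt_vel[1] - own_vel[1]  # relative velocity
    v2 = vx * vx + vy * vy
    if v2 == 0:                  # identical velocities: range never changes
        return math.hypot(rx, ry), 0.0
    tcpa = max(0.0, -(rx * vx + ry * vy) / v2)   # clamp: CPA already passed
    cpa = math.hypot(rx + vx * tcpa, ry + vy * tcpa)
    return cpa, tcpa

# Target 5 nm due north, heading south at 10 kn; own ship stationary.
cpa, tcpa = cpa_tcpa((0, 0), (0, 0), (0, 5), (0, -10))
print(cpa, tcpa)  # -> 0.0 0.5 (collision course, CPA in 30 minutes)
```

Run against every radar and AIS contact several times a second, this kind of computation is precisely where software outpaces a watchkeeper's mental arithmetic.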
Additionally, standardised messaging formats may reduce miscommunication (e.g. not identifying the relevant vessel) or miscomprehension (e.g. due to linguistic issues) and increase the speed of communication of collision threats.
Had these technologies been used in Evergreen, the collision risk may have been identified substantially sooner than the “three seconds” in which the Master of Ever Smart realised a collision was inevitable.
In the judgment, the greatest “causative potency” for the collision came from the Ever Smart proceeding along the port side of the narrow channel, her excessive speed and her failure to keep a good visual lookout.
Autonomous systems could ensure that, within a narrow channel, vessels proceed on the starboard side at pre-set maximum speeds. Modern manned vessels are already equipped with Electronic Nautical Chart Systems linked to speed and depth sensors, GPS and AIS. Implementing these systems to operate autonomously would allow Port Control to ensure that “safe speed” is observed.
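In outline, a Port Control safe-speed check of this kind could be as simple as comparing speed over ground against a chart-derived zone limit. The zone names and limits below are invented for illustration; in practice the limits would come from an Electronic Nautical Chart layer and would vary with draught, traffic and conditions:

```python
# Hypothetical speed limits (knots) per chart zone; None means no fixed cap.
ZONE_LIMITS = {"narrow_channel": 8.0, "approach": 12.0, "open_water": None}

def safe_speed_ok(zone, speed_over_ground_kn):
    """Return True if the vessel's speed over ground complies with the
    limit for the chart zone she is currently in."""
    limit = ZONE_LIMITS.get(zone)
    return limit is None or speed_over_ground_kn <= limit

print(safe_speed_ok("narrow_channel", 12.3))  # excessive speed -> False
print(safe_speed_ok("approach", 11.0))        # -> True
```

A hard cap of this sort is trivial to enforce autonomously, which is exactly the point: the failures the Court identified on Ever Smart's part were failures of discipline, not of judgement.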
Furthermore, thermal and infra-red cameras can identify objects the human eye cannot. While the Master of Ever Smart only made out Alexandra I after her deck lights were on, such cameras may have picked up her heat signature much earlier.
While autonomous vessels may reduce collision risk, developers are still faced with several difficult problems, particularly when manned, unmanned and autonomous vessels navigate the same waterways:
- How will developers determine if Rules 11–18 (Conduct of vessels in sight of one another) or Rule 19 (Conduct of vessels in restricted visibility) applies? If one vessel is manned, would the vessels be “in sight” of one another if the AI has infra-red vision? Alternatively, should Rule 19 be removed altogether because of advances in technology (e.g. better radars) on all ships?
- How will developers program discretion into the system, given that situations will always arise where it is best to deviate from the rules? As these situations cannot all be identified in advance, machine learning will be required, but effective machine learning requires sufficient data.
- Where would liability lie in a collision involving an autonomous vessel? Would developers be liable if it can be proven that faulty programming caused the collision?
- How should developers deal with ethical considerations, such as a choice between damage to the autonomous vessel and the loss of human life?
- What happens if an autonomous vessel suffers a catastrophic failure, such as a complete electrical breakdown? Would the vessel be “not under command” for the purposes of the COLREGs and, if so, how would the vessel communicate this to nearby vessels?
Evergreen demonstrates that autonomous vessels may struggle with questions of law, and it may be necessary to review the COLREGs to remove uncertainty where possible. That said, no amount of redrafting can completely remove uncertainty, or the element of human judgement involved in deviating from the rules.
Nevertheless, Evergreen also demonstrates that two autonomous vessels may have been able to avoid the collision entirely, or reduce the damage suffered by the vessels, making the difficult questions of law redundant in the first place.
Source: Riviera