Loss Prevention

Embracing AI in geo-engineering – the benefits and pitfalls


It is safe to say that the age of Artificial Intelligence (AI) has arrived, whether we want it or not, and it is probably unrealistic to avoid using AI as we look to the future. To many younger and upcoming professionals, AI is a ‘go-to’ tool in the way a slide rule or log tables were to historical engineering pioneers. There are many opportunities to be had in the use of AI, but these are tempered by risks which need to be recognised and managed appropriately.

Within the geo-engineering industry, it is typically text-based Large Language Models (LLMs) such as ChatGPT or Copilot that are most familiar. These are generative models which can be fine-tuned for specific tasks, and which acquire their predictive power from the data they are trained on. This data may or may not be accurate! There are other forms of AI being used in the industry, but with the same pros and cons.

It is recognised that AI has the potential to enable efficiencies in day-to-day processes. In simple terms, the ideal is that routine processes can be streamlined, freeing up time for innovative thinking and developing solutions. Such efficiencies may be achieved in, for example:

  • Reporting
  • Specifications
  • Meeting minutes
  • Correspondence
  • Presentations
  • Diagrams/ Drawings
  • Document management
  • Information searches
  • Data management/ analysis

As a result, there has been an increase in reliance on AI in creating/ undertaking the above. However, such benefits come with a ‘warning’. All of the above still require ‘due care and diligence’ in the production process, without which Professional Indemnity insurance will likely be invalidated. But what does reasonable ‘due care and diligence’ look like? And at what point does the use of AI itself become a requirement of meeting the ‘due care and diligence’ obligation?

When a process relies on AI, it becomes even more essential that appropriate due care and diligence is applied, as the liability for AI-derived deliverables likely does not sit with the software or its originator, but with the human individual/ organisation that adopts the AI output (although this remains to be tested through the Courts). There is a significant risk in accepting AI-generated output at face value, so what might appear to be a quick and simple solution could be a recipe for disaster if not properly managed and controlled.

If AI is to be adopted safely, as with all other processes, a quality system of checks and balances has to be in place covering the accuracy and validity of both the questions asked and the information retrieved. The more AI is relied upon, the more robust such quality checks have to be. As with all computer models: garbage in, garbage out.

Consideration is required of the accuracy of the data used by the AI model, how that model has been trained and the relevance of the data sources accessed, along with qualitative testing, repeat questioning and validation of results. The question must be asked: ‘Is this the right tool for the job?’

A possible solution may be amendment of standard forms of appointment to include specific clauses relating to the use of AI, by defining which AI tools are permitted for use and how their outputs must be verified. Furthermore, a clear allocation of liability is recommended for failures attributable to the AI tool, apportioning risk between the relevant project team parties and the Client.

There are also risks associated with the confidentiality of data, both as an input and as an output. What data is permitted to be input into models? Are there licensing and Intellectual Property issues associated with the sharing of data? These questions must be considered.

A further consideration is that whilst there may be efficiencies in time/ resources, there is also an environmental impact from the use of AI models. The huge computing capacity required to power AI models has a significant carbon footprint. For projects where carbon management and/or measurement is required, possibly as a contractual requirement, the scope and scale of AI use will need to be considered and included in carbon calculations.

(This article is based on the presentation given by Ben Gilson of Arup at the AGS Annual Conference 1/5/25 and “A Brave New Blueprint: The Legal and Contractual Quagmire” by Craig Roberts, Griffiths and Armour, 7/11/25).

Article provided by Jo Strange (AGS Honorary member)