1. The Rise of Design-Build Project Delivery
The prevalence of design-build as a project delivery method has rapidly increased over the past decade. According to several recent surveys by consulting firm FMI and the Design-Build Institute of America (DBIA), design-build is projected to comprise nearly half of all construction spending in the next year. The rapid growth of design-build as a more flexible, streamlined, and efficient project delivery method coincides with the rise of artificial intelligence (AI) and its promise of even greater efficiency and optimization. This article examines the legal risks associated with the use of AI in the design-build project delivery method.
2. Risks Associated with the Use of AI
Construction firms utilizing the design-build methodology need to consider the risks associated with AI. In particular, designers at those firms should recognize potential pitfalls before adopting AI in the design and construction process. The risks include, but are not limited to, the following:
a. Professional Responsibility
One of the principal outstanding issues is whether the use of AI will be required to meet the applicable standard of care for designers. There does not appear to be Minnesota case law on this issue as of the date of this publication. Design professionals should monitor current trends in their respective fields to help inform what uses of AI, if any, are pertinent to meeting the standard of care.
The American Institute of Architects published “The Architect’s Journey to Specification: Artificial Intelligence Adoption in Architecture Firms: Opportunities & Risks” in 2024. The study found that a majority of responding architects expect to use AI more in their day-to-day practice moving forward, and approximately half think mastering AI will be important to their careers. Within the architectural realm, then, the study indicates a trend toward greater use of AI.
At least one industry group has published a statement on the use of AI by designers. The American Society of Civil Engineers (ASCE) issued Policy Statement 573 on July 18, 2024, regarding the use of AI. The Policy Statement provides, in part, that “AI has the potential to enhance efficiency, innovation, and sustainability in civil engineering practices. … However, AI cannot be held accountable, nor can it replace the training, experience, and [judgment] of a professional engineer in the planning, designing, building, and operation of civil engineering projects and the protection of the public health, safety, and welfare.” The ASCE has, therefore, signaled that AI is a helpful tool to supplement, but not replace, the work of a professional engineer.
Professional rules for design professionals can vary depending on practice area or geography. Some rules require designers to consider the capabilities and limitations of emerging technologies. These rules imply that design professionals should at least consider how, if at all, AI can or should be utilized. Ultimately, regardless of whether AI is used, design professionals still need to meet their obligations to act with reasonable care and competence in performing their work.
b. Security / Client Confidentiality
Client confidentiality and data security are core concerns when using AI, including the concern that work product or designs may be captured by generative AI. Understanding the difference between open-source and closed systems is key to maintaining the confidentiality of data. For example, confidential client information or work product put into an open-source system will not have any safeguards on who can use the data or how it is used. As a best practice, designers should not put any confidential client data or secure information into an AI platform without first understanding what confidentiality safeguards are in place in the system being utilized.
c. Hallucinations and Inaccuracies
A hallucination is a response from AI that includes false or misleading information. A well-known example of a hallucination in the legal field comes from an attorney in New York who relied on ChatGPT to assist with research for a legal brief. ChatGPT simply made up multiple citations to cases, which the attorney cited in the legal brief. The attorney only discovered the cases did not exist after submitting the brief to the court. The likelihood of similar hallucinations in the design field, such as fabricated building code provisions or other applicable standards, should not be underestimated. Of course, the frequency of hallucinations varies: some studies estimate hallucinations occur only one to three percent of the time, while other studies indicate they occur as often as 27 percent of the time. Regardless, given the well-known potential for hallucinations, checking information provided by AI for accuracy is crucial.
d. Inherent Bias
The use of AI does not eliminate unintentional or inherent bias. Unintentional bias in AI can arise out of the data sets used to generate the content and algorithms. For example, hiring algorithms can inadvertently favor a certain demographic; Amazon experienced this issue with its (now scrapped) hiring algorithm that favored men’s resumes over women’s resumes. Errors can also arise from improper data sets: if a data set is outdated, information generated from AI will also be outdated. One way to address inherent bias is to check the data sets used to generate the AI content and determine whether the underlying data is itself flawed. For designers, it is important to ensure AI algorithms are not overemphasizing certain variables or factors over others in the analysis or interpretation of data.
e. Intellectual Property
The use of AI poses both ownership and use questions. Currently, there is no ownership over materials generated solely by AI; these materials are considered to be in the public domain. This can complicate designers’ claims of ownership where AI is used to generate designs. However, human involvement in the design process, such as editing, revising, or modifying AI-generated material, can help address those concerns. In addition, as recently decided in Thomson Reuters Enterprise Centre GmbH and West Publishing Corp. v. Ross Intelligence Inc. (D. Del. Feb. 11, 2025), the use of generative AI has the potential to infringe copyright if copyrighted materials are used improperly by an AI platform.
3. Implementing the Use of AI
Understanding the risks associated with the use of AI helps inform how AI can be implemented. Design-build firms utilizing AI need to have policies and procedures in place regarding its use. Designers should also clarify to whom those policies and procedures apply, including whether they flow down to subconsultants or other lower-tier parties on a project. Given the rapidly changing landscape of AI, it is also imperative to stay informed of current laws, regulations, and policies relating to the use of AI and to update internal policies and procedures accordingly.
Ultimately, ensuring human oversight in the use of AI is key. AI is not a replacement for professional judgment and knowledge. The old adage of “trust, but verify” applies to the use of AI.
Announcements
Robert L. Smith is speaking at the upcoming National Business Institute seminar “Constructing Clarity: Legal Paths When Change Orders Are Refused” on Friday, May 30th, at 12:00 PM Central Time. For more information, including details and how to register, click here.
Fabyanske, Westra, Hart & Thomson, P.A. is pleased to announce the election of its new President and Executive Committee. The following six attorneys now comprise the Fabyanske Executive Committee: Jesse R. Orman (President), Katie A. Welsch, Jeffrey W. Jones, Matthew T. Collins, Rory O. Duggan and Robert L. Smith.