
Autodesk University 2024 – Californ.ai Dreamin'

AI was the obvious hot topic at Autodesk University 2024. But there were some unexpected takeaways from the projects on display.

I spent last week in San Diego at Autodesk University, where I had the chance to meet with dozens of customers, partners and industry peers. Just a few weeks earlier, it was touch-and-go whether I would make it. With our two kids still on the tail end of a mid-term break (we didn’t have these breaks when I was a kid!), my wife in the final trimester with our third, and my daughter’s birthday the weekend after the conference, it seemed almost impossible to go away for a week.

In the end, I decided I had to be at the conference and managed to compress my travel to the shortest time possible. I arrived in San Diego just a couple of hours before the opening welcome reception on Monday, and I left just as the closing party started on Thursday. While it was a bummer to miss Counting Crows performing in downtown San Diego, I made it back in time to pick up balloons and cake for the birthday party.

As I boarded the plane from San Francisco back to Hong Kong at 1:00am on Friday, I knew I had made the right decision going to AU. Three and a half days in San Diego felt like several months' worth of interaction, information and inspiration.

Among the most memorable topics of discussion at the conference was the use and impact of AI. On the one hand, this came as no surprise. I mean, can you even have a software conference in 2024 without putting AI at the center of it? On the other hand, there were a couple of aspects of the AI buzz that I had not fully anticipated and that left a real impression on me.

In-House AI

One was the degree to which AEC firms are bringing, or have already brought, AI in house. Perhaps this shouldn’t be surprising, given the trend in recent years for firms to establish their own internal development teams. It’s only natural to have such teams spend time exploring what AI can do, and LLMs seem particularly well-suited to the ongoing problem of how to leverage the massive amounts of semi-structured project data that every firm is sitting on.

But from the examples I saw at the conference, many firms are already past the exploratory stage and have developed and deployed fully functional AI tools. Whether it’s a chatbot that allows teams to query vital but difficult-to-reach project information, or an interface for generating on-demand analytics across a portfolio of projects, these are tools that are already adding real value to the design and delivery of projects.

Personally, I think in-house AI is a fantastic development for the industry. First, because firms have so much data on their hands that they ought to get more value from. And second, because, to some degree or another, every firm has its own particular way of doing things and evaluating outcomes. So instead of being limited to a particular software supplier’s viewpoint, or needing an unwieldy amount of customizability from commercial software, firms can now use AI to layer their own unique methods and perspectives on top of more standardized tools and services.

All About the Wrapper

The other thing that caught my attention at AU was the level of work being done at the “wrapper” layer of AI tools. When the first wave of AI products and services started popping up after the initial release of ChatGPT, there was an oft-repeated criticism that such solutions were merely a “wrapper” around the LLM. The gist of the critique was that those products and services didn’t bring any real value of their own – the value was really in the LLM – and that they could be easily replicated – because anyone could build such a wrapper.

As this recent article from Sequoia Capital points out, the "just a wrapper" critique has turned out to be both right and wrong. While it's true that anyone can build a wrapper around an LLM and offer that as a product or service, it's not at all the case that the wrapper is valueless or trivial to build.

Instead, it turns out that the wrapper is essential for transforming the raw potential of the underlying LLM into actual value – through prompt engineering, application logic, Retrieval-Augmented Generation (RAG), and other engineering work that structures, controls, extends and validates the LLM’s processing and output for specific use cases. And it turns out that building a high quality wrapper takes a lot of work!
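To make the "wrapper" idea concrete, here is a minimal sketch of the RAG pattern described above. Everything in it is hypothetical and deliberately simplified: the keyword-overlap retrieval stands in for a real vector search, and the `llm` callable stands in for an actual model API. Real production wrappers from the firms mentioned here would be far more sophisticated, but the shape is the same: retrieval, then prompt assembly, then a controlled call to the underlying model.

```python
from dataclasses import dataclass


@dataclass
class Document:
    """A project record the firm wants the LLM to be able to draw on."""
    title: str
    text: str


def retrieve(query: str, docs: list[Document], k: int = 2) -> list[Document]:
    """Toy retrieval step: rank documents by keyword overlap with the query.
    A real wrapper would use embeddings and a vector index instead."""
    terms = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(terms & set(d.text.lower().split())))
    return scored[:k]


def build_prompt(query: str, context: list[Document]) -> str:
    """Prompt engineering step: ground the model in the retrieved records
    and constrain it to answer only from them."""
    sources = "\n\n".join(f"[{d.title}]\n{d.text}" for d in context)
    return (
        "Answer using ONLY the project records below. "
        "If the answer is not in them, say so.\n\n"
        f"{sources}\n\nQuestion: {query}\nAnswer:"
    )


def answer(query: str, docs: list[Document], llm) -> str:
    """The 'wrapper' itself: retrieval + prompt assembly around a raw LLM call."""
    prompt = build_prompt(query, retrieve(query, docs))
    return llm(prompt)


# Usage with a stub in place of a real model:
docs = [
    Document("RFI-042", "The atrium skylight RFI is awaiting structural review"),
    Document("Spec-09", "Interior paint finishes must use low-VOC products"),
]
stub_llm = lambda prompt: f"(model response grounded in {len(prompt)} chars of context)"
print(answer("What is the status of the skylight RFI?", docs, stub_llm))
```

The point of the sketch is that none of the value-adding work lives inside the model call: the retrieval logic, the prompt constraints, and any validation of the output are all wrapper code that the firm writes and owns.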

At AU, I was struck by the degree to which the projects on display evidenced Sequoia’s take on AI products. Firms are putting in significant work to craft a robust wrapper for their AI projects, by giving the LLM access to their specific resources, training it on how to interpret those resources, building logic for how the LLM should act on different kinds of prompts, and so on. And this is where the magic really happens – where the AI project goes from a novel plaything (something that wows but can’t be taken too seriously) to a breakthrough solution (something that reliably solves a problem in a way previously unimaginable).

California Dreamin’

There were multiple reasons that I left San Diego with a heightened sense of the future possibilities for Kinship and our industry as a whole. I had a ton of energizing conversations with our customers, we got to spend time as a team brainstorming on new ideas we encountered during the conference, and looking out onto the Pacific in southern California always lends itself to an expanded imagination. But what I saw at AU around AI applications was exciting enough to make even the most cynical voices in AEC technology willing to dream again. I can’t wait to return to AU in Nashville next September to see how much closer those dreams are to becoming a reality.

