Reflections from the Inaugural AI Summit at Harvard

By Erin Kiesewetter

In October 2025, the Harvard Data Science Initiative (HDSI) and Harvard Data Science Review (HDSR) co-hosted the Inaugural AI Summit with Archerman Capital. The goal was simple yet ambitious: bring together scholars, technologists, executives, industry leaders, and students to explore the frontiers and future of artificial intelligence. 

As they opened the day at the American Academy of Arts and Sciences, HDSI Faculty Director Francesca Dominici, HDSR Founding Editor-in-Chief Xiao-Li Meng, and venture capitalist Harry Archerman framed the day’s conversation around connection. Sitting there, I was excited to be part of a crossroads moment – between disciplines, between academia and industry, and between humans and their own inventions. Their vision reminded me that as AI shapes our world, it is our responsibility to understand not only what AI is capable of and where it is taking us, but also what it is teaching us about ourselves.

Navigating the Frontiers of Knowledge and Humanity

The opening session with physicist Peter Galison and economist Andrew Lo underscored how AI is reshaping both science and finance. Galison discussed how machine learning can change the questions scientists ask, while Lo reflected on how investors must adapt to models that think differently from humans. Both cautioned against overconfidence in data and models: true progress depends on maintaining space for human judgment, even as machines grow more capable. It reminded me that artificial intelligence should not replace human reasoning, but refine it.

The day then took a more reflective turn. A panel moderated by Xiao-Li Meng featured historian Stephanie Dick, philosopher Sean Kelly, and cultural theorist Doris Sommer. Dick traced the history of AI through three “acts” – reasoning machines, expert knowledge systems, and accurate predictions – showing how each stage shaped our notion of intelligence itself. Sommer posited that if we prioritize efficiency over process, we risk losing curiosity and the capacity for surprise, both essential to art and democracy. Kelly asked perhaps the most human question of the day: Who are we becoming alongside these tools? Listening, I felt that the humanities do not stand apart from AI. They complete the conversation, reminding us that technology isn’t only built, but imagined.

Connecting Innovation and Purpose

Founders, too, emphasized the connection between innovation, creativity, and purpose. Engineer Jim Keller joined Francesca Dominici and Harry Archerman for a fireside chat on creativity in hardware design. Drawing from his experience at Apple, AMD, and Tesla, he described chip design as “a mix of striving, heart, and institution.” His reflection on complexity – that no single person can now fully understand the systems we create – underscored the need for clarity and trust within teams. That same theme of trust carried into a later conversation between entrepreneur Bo Li and investor Becca Wood, who discussed how to build responsible AI systems. While Keller emphasized curiosity and fundamental learning, Li spoke of passion and intention: “If you do research you truly care about, impact will follow.” Both reminded me that ethical innovation rests not only on technical mastery but on sincerity of purpose.

Later, Arvind Jain, founder of Glean, echoed Keller and Li through the frame of entrepreneurial thinking. He described his journey as unplanned – the result of solving problems that he found meaningful. He encouraged the audience: “If there is a problem, you should believe you can solve it.” As someone early in my career, I found Jain’s reflections on going back to basics, even as AI reshapes the workforce, especially relevant. Learning calculus, Jain emphasized, teaches precision, logic, and persistence – qualities that no algorithm can replace. It was a comforting reminder that curiosity and critical thinking are timeless assets.

Balancing Hype and Reality

Widening the focus to the international stage, investors Emma Norchet, Lucas Swisher, and Tomasz Tunguz joined moderators David Homa and Jackson Dean for a panel on AI’s impact on global markets. Again, they returned to the theme of trust. Amid skyrocketing investments and shortened adoption cycles, they emphasized due diligence, model reliability, data provenance, and ethical responsibility, balancing excitement for real-world AI with caution about hype and global decoupling. Even from an investor’s perspective, the conversation kept returning to ethical use cases and transparency.

A later discussion, moderated by Archerman partner Tyler Flint, brought together Circle CTO Li Fan, Harvard CIO Klara Jelinkova, and Datadog Chief Product Officer Yanbing Li to explore how organizations can move from ambition to action. Jelinkova described Harvard’s balanced approach to AI – not banning tools like ChatGPT or Gemini, but guiding their responsible use – while Fan and Li spoke about democratizing AI and empowering “AI natives” who ask “why.” That focus on purposeful adoption flowed naturally into entrepreneur Jonathan Corbin’s reflections on the shift from automation to autonomy. He described “human-in-the-loop orchestration” as a way to design systems that amplify rather than replace human creativity. Together, these conversations underscored that leadership in the AI era means balancing experimentation with accountability, and building technologies that adapt quickly without losing human intention or imagination.

Ultimately, It’s All Relational

The Summit closed with 2024 Turing Award winner Richard Sutton, the “father” of reinforcement learning. He described AI as emerging from experience – the process of trying, failing, and learning. It was the perfect closing message: after a day filled with questions about creativity, safety, and trust, his final reminder was grounding. Learning comes from engaging with the world and those around you, and while AI may learn faster, humans learn with meaning and authentic intention.

By the end of the day, one idea stood out: intelligence, artificial or not, is always relational. It grows between disciplines, between humans and machines, and between imagination and discovery. But it can thrive only where those relationships are grounded in ethics. As it turns out, the AI Summit, beyond algorithms and investments, was about rediscovering the human capacity for curiosity and care. To me, that is where the real intelligence lies.