Human vs. AI – or is it?

A reflection on the 2025 HDSI Winter Workshop

By Erin Kiesewetter

On the morning of Friday, March 7, I sat at the kitchen table with my three roommates, discussing our co-op assignments for the day. One would be transcribing a political journalism discussion; another would be placing orders for inventory; and another would be crunching numbers furiously in Excel. As for me, I shared that I’d been asked to write a blog post about the Harvard Data Science Initiative’s Winter Workshop.

My roommates stared at me blankly. 

I explained that I’d been attending sessions on the future of artificial intelligence and building large language models. Heads began to nod. My political journalism roommate then piped up, “Every day I am seeing more and more articles about how Massachusetts corporate and small business offices are prioritizing AI skills over degrees.”

This sparked a thirty-minute conversation on AI and its use in the classroom and workplace. How exactly does it work? Is it ethical to use? How can we use it safely? Why are so many professors against it? Does my boss think it’s cool? Are we going to be replaced by AI? 

These were just a few of the questions thrown around, but they made me think more about how, specifically, data and AI are going to shape the future, because they’re already part of everyday life, whether people like it or not.

About the HDSI Winter Workshop

Since its launch, the Harvard Data Science Initiative (HDSI) has hosted annual gatherings showcasing emerging areas of data science from Harvard as well as collaborators from across the business and academic world. In 2024, the HDSI debuted a new Winter Workshop that focused on hands-on learning, inviting everyone – students, faculty, practitioners, leaders, and more – to grapple with tough questions and play with new tools.

This year’s program centered on AI and large language models (LLMs). It came at a fitting time, given the increasingly widespread use of generative AI. Over two days, expert speakers convened to discuss the infrastructure, implications, and impact of AI and LLMs in society.

But what about my questions? Well, they were answered – for the most part, anyway. I’ll explain through three key takeaways.

Key Takeaway 1 – The human work of AI is changing rapidly.

Pavlos Protopapas delivers a tutorial on building LLMs at the 2025 HDSI Winter Workshop.

Mauricio Tec leading “LLMs as Autonomous Agents” at the 2025 HDSI Winter Workshop.

I’m actually going to work my way backwards through the Winter Workshop sessions. Stay with me, though! Through two technical tutorials, “LLMs in Practice” (led by Pavlos Protopapas, a Lecturer at the Harvard John A. Paulson School of Engineering and Applied Sciences) and “LLMs as Autonomous Agents” (led by Mauricio Tec, a Research Associate at the Harvard T.H. Chan School of Public Health), I finally had the chance to learn more about how complex AI systems and models work. To paraphrase: we collect data, we transform it, and we train a model on it – in this case using the Transformer architecture – to create the base model. We then fine-tune the behaviors we want, using techniques like reward models and Reinforcement Learning from Human Feedback (RLHF) to help the model learn human decisions and expectations. Once all the pieces are in place, the system is “complete,” but it constantly needs to be monitored and maintained to ensure it performs as expected.
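
For readers who, like me, understand things better when they can see them, here is a rough sketch of the first part of that pipeline – collecting data, transforming it, and training a base model – written with the open-source Hugging Face transformers library. To be clear, the model, dataset, and settings below are my own illustrative stand-ins, not what the tutorials actually used; real LLMs follow the same pattern at a vastly larger scale, and the fine-tuning and RLHF stages add further steps on top.

```python
# A rough sketch of the "collect data, transform data, train a model" loop,
# using the Hugging Face transformers library. The model, dataset, and
# hyperparameters are illustrative stand-ins, not what the workshop used.

from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# 1. Collect data: load a small public text corpus as a stand-in.
dataset = load_dataset("wikitext", "wikitext-2-raw-v1")

# 2. Transform data: tokenize the raw text into integer IDs the model can read.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])
tokenized = tokenized.filter(lambda row: len(row["input_ids"]) > 0)  # drop empty lines

# 3. Train the base model: a Transformer trained to predict the next token
#    (starting from a small pretrained checkpoint to keep the sketch runnable).
base_model = AutoModelForCausalLM.from_pretrained("gpt2")
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)  # next-token objective

trainer = Trainer(
    model=base_model,
    args=TrainingArguments(output_dir="base-model", num_train_epochs=1),
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()

# 4. Fine-tune and align: in practice, supervised fine-tuning, reward models,
#    and RLHF repeat this same load-data / train / evaluate pattern,
#    only with human-preference data.

# 5. Monitor and maintain: keep evaluating the model, even after deployment.
print(trainer.evaluate(tokenized["validation"]))
```

Even this toy version makes the point: every stage involves a human decision about what data to use and what a “good” output looks like.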

What really caught my attention is that, for all the hype about AI taking over, it still requires people to fulfill its promise. But this is already changing: a lot of the work that humans have done is now being automated, from the first step – data collection – to the last – monitoring and maintenance.

For me, this raises a lot more questions about the evolving role of human intervention as AI systems become increasingly self-sufficient. How will the automation of these tasks affect the quality and reliability of AI systems? Which jobs are going to be replaced by AI, and how will this affect new generations of workers? What forms of oversight should be developed to ensure AI remains ethical and aligned with human values? Where is the line between assistance and alienation?

Key Takeaway 2 – Legislation and innovation often fight with each other. Starting with strong data governance could help.

Liz Grennan opens “Designing Governance for Trustworthy and Responsible AI”.

Gabriel Morgan Asaftei discusses balancing controls and innovation.

Luckily, some of my burning questions had been answered in a previous session, “Designing Governance for Trustworthy and Responsible AI.” The process I described above is deceptively straightforward: you gather a ton of data and play around with code. However, there are still pressing concerns about how exactly AI systems are structured.

One particular issue is that AI systems and models are extremely complex, which can make it difficult for legislators to fully understand the technology. By the time a piece of legislation is drafted, the technology might have advanced so significantly that the law is already outdated or irrelevant. For example, it is hard to tell where an AI gets its information; a “works cited” section doesn’t exist (although one would make my life easier as a student).

So where does the collected data come from? Who processed the data? What are their viewpoints on the data? Did they remove elements of the data that conflict with their personal opinions? How could this affect the model? What exactly is a desirable output if desire is subjective? Is your head spinning yet? Exactly. There’s a lot to consider.

In the session, Liz Grennan, Partner at McKinsey and HDSI Visiting Fellow, offered an approachable way to view AI systems through a governance mindset: the policies, procedures, and ethical considerations that guide how a system is developed, deployed, and maintained, with the goal of ensuring it operates ethically and responsibly.

A key benefit is that when an AI system is built with strong, transparent governance, it reduces barriers for legislators, who currently need time to assess how the system works and what its implications might be. More transparent systems would speed up that process and pave the way for legislation that protects public interests without disrupting innovation. The session convinced me that we need forward-thinking strategies around AI governance to prevent the technology from creating larger problems down the road.

Key Takeaway 3 – Accountability will help build trust in AI.

From L to R, Thierry Coulhon, Gillian Hadfield, David Leslie, Garrison Lovely, and Lauren M.E. Goodlad separate fact from fiction during “The Great AI Debate.”

As a Business Administration major, I think a lot about how AI is assisting companies: automating routine, repetitive tasks, uncovering patterns and trends in customer data, and providing personalized feedback via chatbots. While I think these are obvious business wins, companies are also beginning to take note of the risks, like data privacy, intellectual property exposure, job displacement, and overreliance on the systems themselves.

Today, many companies are left to self-regulate, making their own rules and ethical standards around AI use. Liz Grennan’s session also explored the nuances here. Self-regulation offers flexibility: businesses can make AI work toward their specific goals. But without external oversight, accountability becomes complicated. Who decides whether a company’s AI practices are fair? How will we hold companies and individuals accountable for failures in AI systems? And crucially, how can we know we can trust AI?

I thought back to something I heard at the opening session, “The Great AI Debate,” led by David Leslie, Director of Ethics and Responsible Innovation Research at The Alan Turing Institute. The question of who is accountable remains unresolved, and this lack of clarity complicates how we build trust in AI.

It all boils down to this: self-regulation gives companies the power to establish frameworks and mechanisms for AI usage, which is good, but those frameworks only cover how that particular company uses AI. There’s a school of thought that creating laws around the ways AI is used, rather than around the development process, is a strong starting point for regulating AI at scale. It’s a first step toward regulations that hold corporations accountable for how they deploy AI systems. Trust – not only in the systems themselves but in the ways they are used – develops as a result.

But to bring it full circle: I still think humans will be critical in establishing accountability and trust in the AI we’re creating. For example, if we want to remove biases in data – which are, by the way, a very human reflection – someone has to decide to use diverse datasets to train their AI models. Someone must decide how those datasets are updated to capture changes in demographics and behaviors. Someone has to build the diverse team (of humans) whose differences of opinion can all be reflected in the AI output. This is the human involvement that will create greater accountability and ensure that AI stays aligned with our ethical standards and societal values.

So, what does it all mean?

Back to that Friday morning with my roommates in our kitchen. We had wondered aloud about how data and AI are going to shape the future. Ultimately, attending the HDSI Winter Workshop expanded my understanding of how AI works and how we’re going to grapple with the complexities it introduces. It made me see that AI is about more than just tech: it’s about the choices we make and the rules we set. While the AI revolution is happening at this very moment, I think we can balance that progress with ensuring it creates positive change in the workplace and for society. As for me, I know I’ll carry these lessons forward, balancing innovation with responsibility in my future career.


Erin Kiesewetter is a Program Assistant at the Harvard Data Science Initiative and a third-year student at Northeastern University working towards a Bachelor of Science in Business Administration.