Can AI Pass Engineering Classes? A Study Puts ChatGPT to the Test

Introduction

Imagine breezing through an entire semester of complex engineering coursework without lifting a finger—just copy-pasting questions into ChatGPT and getting perfect answers. Sounds like every lazy student's dream, right? But how close is this to reality?

A new study from the University of Illinois Urbana-Champaign takes this idea seriously, testing whether ChatGPT can independently tackle an entire semester-long control systems course. Spoiler alert: It didn’t ace the class, but it came impressively close—earning a solid B (82.24%), almost matching the class average of 85%.

This experiment sheds light on what AI can (and can’t) do in technical education, giving us insights into how engineering courses might need to evolve in the age of advanced AI tools. Let’s dive into the key findings!


Experiment: Can ChatGPT Survive an Engineering Course?

The researchers set out to answer a simple but bold question: If a student were to rely entirely on ChatGPT (GPT-4) to complete their coursework with minimal effort (literally just copy-pasting questions), how well would it perform?

The study focused on AE 353: Aerospace Control Systems, a junior-level engineering course involving:

  • Auto-graded homework (multiple-choice, coding tasks, numerical problems)
  • Midterm and final exams (math-heavy, hand-written solutions)
  • Programming projects (complex coding and technical reports)

Rather than fine-tuning prompts or providing detailed instructions, the researchers kept things realistic—just like a typical student might:

  1. Screenshot Uploads – Directly feeding images of questions into ChatGPT.
  2. Text-based Prompts – Converting questions into simple text without fancy formatting.
  3. Context-Enhanced Prompts – Adding minimal lecture notes before asking AI to solve problems.

This "lazy student" strategy was used throughout the semester, and the results were fascinating.


How Did ChatGPT Perform?

1. Homework: Almost as Good as Humans

ChatGPT excelled at structured, auto-graded homework assignments, scoring 90.38%, just a hair lower than the class average of 91.44%.

  • It nailed multiple-choice questions (MCQs) with high accuracy.
  • It did reasonably well on coding tasks, though its solutions were often clunky and inefficient.
  • It struggled with more complex numerical problems, where small misinterpretations led to cascading errors.

Takeaway: AI thrives in structured, clear-cut problems but starts sliding when things get tricky.

2. Exams: Surprisingly Strong—But With a Catch

On written exams, ChatGPT scored an impressive 89.72%, even beating the class average of 84.81%. But here’s the catch:

  • Auto-graded final exam questions (which were just variations of homework problems) were aced with a 97.4% score.
  • Hand-written midterm solutions were shakier at 86.5%, showing difficulty with deeper mathematical reasoning.

Additionally, when fed solutions from past midterms to assist on the final, the AI showed almost no improvement—meaning it struggles to apply past knowledge effectively.

Takeaway: While AI can mimic solutions well, it lacks genuine understanding and adaptability.

3. Programming Projects: A Major Weakness

ChatGPT’s biggest weakness was in long-term, complex projects, where it scored 64.34%—far below the class average of 80.99%.

  • Its Python code worked, but often inefficiently.
  • It fell short on error handling, system integration, and optimization, all of which are essential in real-world engineering.
  • Its technical reports leaned on overcomplicated jargon, sounding impressive but lacking real depth.

Takeaway: AI-generated code “works” but often lacks finesse, optimization, and deeper engineering judgment.
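To make that gap concrete, here is a small, hypothetical Python sketch (not code from the study) of the kind of routine a control systems project might require: simulating a closed-loop state-space system. The dimension and stability checks are the sort of defensive engineering the researchers found AI-generated code tended to skip.

```python
import numpy as np

def simulate_closed_loop(A, B, K, x0, steps=50):
    """Simulate x[k+1] = (A - B K) x[k], a typical building block in a
    control systems project. Hypothetical illustration only; the checks
    below stand in for the error handling the study found missing."""
    A, B, K, x = (np.asarray(M, dtype=float) for M in (A, B, K, x0))
    n = A.shape[0]
    # Validate dimensions up front instead of letting NumPy fail mid-computation.
    if A.shape != (n, n):
        raise ValueError("A must be a square n x n matrix")
    if B.ndim != 2 or B.shape[0] != n:
        raise ValueError("B must be an n x m matrix")
    if K.shape != (B.shape[1], n):
        raise ValueError("K must be an m x n gain matrix")
    if x.shape != (n,):
        raise ValueError("x0 must be a length-n state vector")
    A_cl = A - B @ K
    # Flag an unstable closed loop early rather than returning a diverging trajectory.
    if np.max(np.abs(np.linalg.eigvals(A_cl))) >= 1.0:
        raise ValueError("closed-loop system is unstable in discrete time")
    trajectory = [x]
    for _ in range(steps):
        x = A_cl @ x
        trajectory.append(x)
    return np.array(trajectory)
```

A naive version that skips these checks would still "work" on clean inputs, which is why auto-graders often accept it, but it is the first thing a reviewer or a real integration effort would catch.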


What This Means for the Future of Engineering Education

1. AI Can Pass, But It Can’t "Think"

ChatGPT can mimic engineering knowledge, but it doesn’t truly grasp concepts the way a skilled student would. It answers questions based on pattern recognition but fails at innovation, troubleshooting, and deep intuitive reasoning—all essential traits of a great engineer.

2. Structured Assessments May No Longer Be Enough

Auto-graded assignments and multiple-choice exams are too easy for AI. If universities rely too heavily on these formats, students can simply use AI to pass without actually learning the material. Instead, educators may need to:

  • Use more open-ended, real-world projects that require human creativity.
  • Shift exams from “right answers” to explaining reasoning and design choices.
  • Focus on AI collaboration rather than treating it like a cheating tool.

3. AI Could Free Up Humans for Higher-Level Thinking

Rather than banning AI, universities could integrate AI into coursework, allowing students to offload tedious calculations and focus on what really matters—critical thinking, decision-making, and innovation.

Imagine engineers of the future using AI not just to solve problems but to ask better questions. That’s where education could (and should) be headed.


Key Takeaways

  • ChatGPT can pass an undergraduate control systems course with a B-grade (82.24%), nearly matching the class average (84.99%).
  • It thrives in structured problems like auto-graded homework and multiple-choice exams.
  • AI struggles with open-ended tasks like coding projects and written explanations requiring deep reasoning.
  • Auto-graded exams don’t effectively measure understanding, as AI can ace them without actually learning.
  • Engineering education must evolve, focusing on applied problem-solving, critical reasoning, and AI-aware assessment methods.
  • Instead of banning AI, universities should explore how to integrate it effectively to enhance education.

Final Thoughts: Should Students Use ChatGPT for Courses?

If you’re an engineering student thinking "Can I just use ChatGPT to pass my classes?", the answer is: Sort of, but it won’t make you a great engineer.

AI can help with tedious tasks, but real engineering requires creativity, intuition, and adaptability—things no AI currently possesses. If education adapts to focus on these uniquely human skills, students will graduate not just as engineers but as AI-powered problem solvers for the future.

So, instead of fearing AI, maybe we should ask: How can we use it to make learning better?

What do you think—should universities change their approach to assessments in the AI era? Let’s discuss in the comments! 🚀

Stephen, Founder of The Prompt Index

About the Author

Stephen is the founder of The Prompt Index, the #1 AI resource platform. With a background in sales, data analysis, and artificial intelligence, Stephen has successfully leveraged AI to build a free platform that helps others integrate artificial intelligence into their lives.