
Regression vs Classification: The ML Models That Actually Drive Business Value

Written By Mike McGee

Edited By Liz Eggleston

Last updated December 9, 2025

Course Report strives to create the most trustworthy content about coding bootcamps. Read more about Course Report’s Editorial Policy and How We Make Money.

Artificial intelligence may feel overwhelming in 2026, but inside real companies, the tools that drive business value are far simpler than the hype suggests. To help beginners understand what actually matters, we spoke with Artem Yankov – a former engineer at Zillow, Rover, PNNL, and Google, and now a mentor for Springboard’s Online Machine Learning Engineering and AI Bootcamp. Artem breaks down the two core machine learning models used across industry today, how companies evaluate whether an AI investment is worth it, and the early-career habits that set new practitioners up for success.

Meet Your Expert: Artem Yankov

Mike McGee: Artem, tell us about your background. How did you become a machine learning engineer?

Artem: I’m a mentor for Springboard’s Online Machine Learning Engineering and AI Bootcamp, but my path into ML started in a very different place. I began a PhD in nuclear engineering at the University of Michigan, and then the Fukushima accident happened. Almost overnight, many nuclear opportunities dried up, but my thesis work was already statistical and “machine learning–ish,” so I decided to pivot into data science.

I was largely self-taught – lots of books and a few Coursera courses – and I built a college basketball prediction model for March Madness as my first real project. I used that project to pitch myself directly to hiring managers and eventually landed an internship with an analytics firm in Seattle, which turned into my first full-time data role. Over time, as tools like AutoML started to abstract away a lot of model building, my work naturally shifted toward integrating models into data pipelines – that’s how I ended up as a machine learning engineer.

You’ve had some really interesting jobs!

Artem: I’ve definitely had a varied career. At Zillow, I worked on pricing the ad slots where real estate agents’ faces appear next to listings. At Rover.com – basically Airbnb for dog sitters – I built ranking algorithms to decide which sitters you see first after you enter your preferences.

I then moved into deep learning work at Pacific Northwest National Laboratory in the national security space, and later joined a startup called ClusterOne that focused on scaling machine learning models across large clusters. Most recently, at Google, I worked as a data engineer on the satellite imagery team for products like Google Maps and Google Earth, helping turn raw satellite images into something usable in those products.

Understanding the 2 Machine Learning Models That Actually Drive Business Value

When beginners enter the field, they often jump straight to the broad idea of “artificial intelligence.” But here’s the distinction I always make:

  • AI is the big, abstract vision of intelligent systems.

  • Machine learning is the practical subset – the part businesses actually use, deploy, and measure today.

And within machine learning, two model types power most of the systems you interact with: Classification and Regression, both of which fall under supervised learning.

Here’s how these models work, where they’re used, the KPIs that matter, and the common mistakes beginners should avoid.

Supervised vs Unsupervised Learning – The Foundation

Machine learning is generally divided into two categories:

  • Supervised Learning: The model learns from labeled data – meaning you already know the correct answers. Common use cases: fraud detection, churn prediction, home price estimation, medical diagnosis.

  • Unsupervised Learning: The model learns from unlabeled data – you’re looking for patterns without knowing the “right answer.” Common use cases: segmentation, clustering, anomaly detection.
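
As a quick illustration of the unsupervised case (the supervised case is covered by the classification and regression sketches below), here is a minimal example with made-up data, assuming Python and scikit-learn:

    # Unsupervised learning: no labels, just look for structure in the data.
    from sklearn.cluster import KMeans

    # Made-up customer data: [monthly spend, visits per month] – no "right answer" attached.
    customers = [[20, 1], [25, 2], [300, 12], [280, 10], [22, 1], [310, 11]]

    segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(customers)
    print(segments)  # two discovered customer segments, e.g. something like [0 0 1 1 0 1]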

Most real-world ML systems use supervised learning. And within that category, two models dominate industry work:

1. Classification Models. A classification model takes input data and assigns it to a discrete bucket, predicting categories like:

  • “fraud / not fraud”
  • “cancer / no cancer”
  • “churn / no churn”

Classification is the right choice when the key question is “Which group does this user or event belong to?”

  • At a bank: Is this transaction fraudulent?
  • In healthcare: Does this scan indicate cancer?
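
Here is a minimal sketch of a classification model – a made-up fraud example, assuming Python and scikit-learn rather than any particular company’s stack:

    # Classification: learn from labeled transactions, then assign new ones to a bucket.
    from sklearn.ensemble import RandomForestClassifier

    X = [[12, 1], [950, 4], [8, 1], [700, 5], [15, 2], [820, 6]]  # [amount, attempts]
    y = [0, 1, 0, 1, 0, 1]                                        # 1 = fraud, 0 = not fraud

    model = RandomForestClassifier(random_state=0).fit(X, y)

    new_transaction = [[880, 5]]
    print(model.predict(new_transaction))        # a discrete bucket, e.g. [1] -> fraud
    print(model.predict_proba(new_transaction))  # the class probabilities behind that call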

2. Regression Models. A regression model uses input features to estimate a number – it predicts a continuous numeric value.

Regression is ideal when the question is: “How much?” or “By how many?” Examples from my own experience:

  • Predicting home values at Zillow
  • Forecasting demand or revenue
  • Pricing real estate ad placements
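
And a matching sketch for regression – again a made-up example assuming Python and scikit-learn, not Zillow’s actual model:

    # Regression: learn from past sales, then estimate a continuous number (a price).
    from sklearn.linear_model import LinearRegression

    X = [[850, 2], [1200, 3], [1500, 3], [2100, 4], [2600, 4]]  # [square feet, bedrooms]
    y = [210_000, 290_000, 340_000, 460_000, 520_000]           # sale prices

    model = LinearRegression().fit(X, y)

    print(model.predict([[1800, 3]]))  # a single continuous dollar estimate, not a category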

How to Choose the Right ML/AI Model for Your Task

How do teams decide which type of model to apply?

It really depends on the business problem. Data scientists and ML engineers can get obsessed with algorithms, but at the end of the day, models exist to serve the business.

For example, at Zillow, the priority is accurately predicting home values, so regression models are core to their work. Other companies might rely more on classification if they’re predicting things like fraud, churn, or medical outcomes. The model isn’t chosen because it's “cool” – it’s chosen because it solves the problem that matters most to the business.

When you think about business value, what technical metrics or KPIs help prove the real ROI of an AI investment?

It’s a tricky question, because proving ROI is rarely straightforward. Most companies start with an A/B test: one group sees the new machine-learning–powered feature, and another sees the original version. Then you track business metrics such as customer acquisition cost, revenue per session, customer lifetime value, and churn reduction.

If the ML-exposed group shows statistically significant improvement, that’s a signal the model might add value. But even then, you have to weigh that gain against the cost of deploying and maintaining the model – retraining, monitoring for model drift, cloud compute costs, engineering time, all of it.

So even when a model “works,” it must outperform the operational cost of running it before it truly delivers ROI.
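
As a rough sketch of that A/B evaluation step – made-up numbers, plain Python, and a simple two-proportion z-test rather than any particular company’s testing framework:

    # Did the ML-powered variant convert significantly better than the control?
    from statistics import NormalDist
    from math import sqrt

    control_users, control_conversions = 50_000, 1_500      # 3.00% baseline conversion
    treatment_users, treatment_conversions = 50_000, 1_620  # 3.24% with the model

    p1 = control_conversions / control_users
    p2 = treatment_conversions / treatment_users
    pooled = (control_conversions + treatment_conversions) / (control_users + treatment_users)
    se = sqrt(pooled * (1 - pooled) * (1 / control_users + 1 / treatment_users))

    z = (p2 - p1) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided test

    print(f"lift: {p2 - p1:.4%}, z = {z:.2f}, p = {p_value:.4f}")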

So if the business metrics look good and the team wants to push the model to production, what criteria determine whether it’s actually production-ready?

It comes down to comparing the model’s business impact with the real cost of running it. If a model, for example, cuts customer acquisition costs in half but requires expensive retraining, added engineering support, and significantly more compute, the benefit may not outweigh the operational burden.

Every business is different, but the bar is usually: Does the lift in key metrics clearly exceed the cost and complexity of maintaining this model over time? Models also introduce latency, which can hurt the customer experience, so that’s another factor. Even if a model performs well in testing, it needs to be reliable, affordable to operate at scale, and stable enough that it won’t degrade quickly once real users interact with it.
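
A back-of-the-envelope version of that bar might look like this – every number here is invented purely for illustration:

    # Does the measured lift clearly exceed the cost of running the model?
    monthly_sessions = 2_000_000
    revenue_per_conversion = 40.00
    conversion_lift = 0.0024        # absolute lift measured in the A/B test

    extra_revenue = monthly_sessions * conversion_lift * revenue_per_conversion

    monthly_cost = (
        3_000    # inference / serving compute
        + 1_500  # scheduled retraining jobs
        + 4_000  # engineering time for monitoring and drift checks
    )

    print(f"extra revenue: ${extra_revenue:,.0f}/mo vs. cost: ${monthly_cost:,.0f}/mo")
    print("worth shipping" if extra_revenue > 2 * monthly_cost else "probably not worth it yet")

The 2x margin here is arbitrary – the point is that the lift has to clearly beat the ongoing cost, not just edge past it.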

The Pitfalls of Using AI/ML on the Job

When can AI/ML be a hindrance to the business? 

Introducing an AI/ML model has real consequences for both the business and the customer experience. A key concern is latency: models can slow down processes and degrade the customer experience, especially in fast-paced environments like online shopping, where speed is crucial for competitive pricing.

Scaling these models can also be expensive, particularly when using cloud providers. Furthermore, there is the issue of technical debt: once introduced, models require ongoing maintenance, retraining, and monitoring to keep performance up and prevent latency from creeping into existing systems.
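
Latency, at least, is easy to measure before launch. Here is a minimal timing sketch – the 50 ms budget and the model object are assumptions for illustration, not a universal rule:

    # Measure p95 inference latency and compare it to a budget before shipping.
    import time

    LATENCY_BUDGET_MS = 50  # assumed product requirement

    def p95_latency_ms(predict_fn, sample_input, runs=200):
        timings = []
        for _ in range(runs):
            start = time.perf_counter()
            predict_fn(sample_input)  # one inference call
            timings.append((time.perf_counter() - start) * 1000)
        timings.sort()
        return timings[int(0.95 * len(timings)) - 1]

    # Example usage with any trained model that exposes a predict() method:
    # p95 = p95_latency_ms(model.predict, [[1800, 3]])
    # print("within budget" if p95 <= LATENCY_BUDGET_MS else "too slow for production")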

Say I'm an entry-level machine learning engineer. What are the most common technical pitfalls I should guard against?

Here are three common pitfalls in AI/ML model development:

  1. Forgetting Business Impact: Many practitioners, especially those with PhDs, become overly focused on technical details and algorithms, forgetting that machine learning must serve the business. The business value should always be the top priority.

  2. Seeking Technical Perfection: It's often better to launch a functional product quickly than to spend months striving for a slightly better model. A major pitfall for beginners is trying to build the most advanced algorithm immediately. Instead, start with the quickest baseline model – even a simple rule – to establish performance metrics. This is crucial because a complex deep learning model might offer minimal improvement over the simple baseline, potentially making it not worth the added complexity and compute. Tools like AutoML, especially from cloud providers like AWS or Azure, make establishing a baseline easier.

  3. Picking Incorrect Metrics: A frequent mistake is selecting the wrong performance metrics. For example, using "accuracy" to predict a rare event like cancer is misleading; a terrible model that always predicts "no cancer" will appear highly accurate. Choosing the correct, business-relevant metrics must be the first step before any model is built.
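
To make the last two pitfalls concrete, here is a toy sketch (assuming Python and scikit-learn) in which a trivial baseline that always predicts “no cancer” scores 99% accuracy while catching zero real cases – exactly why the baseline and the metric have to come before the model:

    # A "most frequent class" baseline on a rare-event problem.
    from sklearn.dummy import DummyClassifier
    from sklearn.metrics import accuracy_score, recall_score

    # 1,000 patients, only 10 true positive cases.
    y_true = [1] * 10 + [0] * 990
    X = [[0]] * 1000                  # features are irrelevant for this baseline

    baseline = DummyClassifier(strategy="most_frequent").fit(X, y_true)
    y_pred = baseline.predict(X)

    print("accuracy:", accuracy_score(y_true, y_pred))  # 0.99 – looks impressive
    print("recall:", recall_score(y_true, y_pred))      # 0.0  – misses every real case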

As we look ahead to 2026, what changes or trends do you anticipate emerging in the AI/ML space? Share your predictions for what's coming down the pipeline.

Many companies are struggling to extract real value from AI developments – similar to the “AI winters” deep learning went through before the current hype. Given the amount of money poured into the space without clear ROI, and the current limits of energy infrastructure, I predict some air will be taken out of the bubble – in other words, another AI winter may be coming.

And if that happens, could it be an opportunity for companies to refocus on using AI more practically?

Artem: Absolutely. I think the most useful near-term role of AI will be helping businesses implement traditional machine learning techniques more effectively. You don’t have to deploy a giant AI model to see value. You can use AI as a helper – to design experiments, pick metrics, debug code, or understand how to integrate simpler models into your data pipelines.

This is why machine learning engineering skills still matter. The Springboard curriculum teaches that integration mindset – not just how to use AI, but how to embed models into real systems responsibly. That’s what companies need most right now.

How to Learn AI/ML in 2026

It feels easier than ever for anyone to deploy AI models with today’s tools. What's your message to beginners who feel like they can jump right in – and where should they actually start?

Kaggle competitions are one of the best places to begin. Kaggle hosts real datasets and real problems from companies, and you can experiment with building models, see how they perform against others, and learn extremely quickly.

The forums are also incredibly supportive – people share starter code, troubleshooting tips, and walk through their approaches. It’s a great way to build hands-on experience with practical machine learning.

Just keep in mind that the top Kaggle solutions are often too complex to ever be deployed in a real company. So treat Kaggle as a learning sandbox, not a template for production systems.

Springboard offers Machine Learning Engineering and AI bootcamps. Based on the students you mentor, who is this program best suited for?

Artem: Honestly, it can work for a wide range of people. I’ve mentored everyone from computer science undergrads who use the bootcamp to supplement their coursework, to mid-career professionals pivoting into something new, to managers who simply want to understand the machine learning work their teams are doing.

The students who tend to thrive have at least a little coding experience – even just a month or two goes a long way because the program is code-heavy. Software engineers looking to upskill often take on the most ambitious capstone projects, but I’ve worked with people from all kinds of backgrounds who’ve successfully completed the program.

What foundational math or technical concepts are essential for someone to be successful in this bootcamp? Do students need a strong math background?

Artem: There is a lot of math behind machine learning, but most of it is abstracted away – you don’t actually see it day to day. For most students, the math you need is pretty basic: addition, subtraction, understanding ratios, reading metrics, interpreting plots, and some light statistics like correlation or p-values.

You don’t need linear algebra or advanced calculus to succeed in the bootcamp. Even many AI breakthroughs today are built on relatively simple concepts, such as the chain rule from high school calculus. Unless you want to go deep into research or develop algorithms from scratch, a basic comfort with numbers is enough.

As we wrap up, what’s your final message for someone considering a career in AI or machine learning?

Artem: I’d say: don’t be intimidated, and don’t get distracted by the hype. The fundamentals of machine learning are still incredibly relevant, and companies still need people who understand how to integrate models responsibly into real systems.

Whether AI continues accelerating or cools off a bit, the need for engineers who can evaluate models, measure impact, and build clean data pipelines isn’t going anywhere. And with tools like Springboard’s bootcamp, you don’t need a PhD to get started – just curiosity, some coding practice, and a willingness to learn.

Artem, thank you so much. This was incredibly insightful – especially your clarity around business value and responsible implementation. For anyone interested in learning more about Springboard’s Bootcamps, you can read Springboard reviews on Course Report. This interview was produced by the Course Report team in partnership with Springboard.


Written by Mike McGee, Content Manager

Mike McGee is a tech entrepreneur and education storyteller with 14+ years of experience creating compelling narratives that drive real outcomes for career changers. As the co-founder of The Starter League, Mike helped pioneer the modern coding bootcamp industry by launching the first in-person beginner-focused program, helping more than 2,000 people learn how to get tech jobs, build apps, and start companies.


Edited by Liz Eggleston, CEO and Editor of Course Report

Liz Eggleston is co-founder of Course Report, the most complete resource for students choosing a coding bootcamp. Liz has dedicated her career to empowering passionate career changers to break into tech, providing valuable insights and guidance in the rapidly evolving field of tech education.  At Course Report, Liz has built a trusted platform that helps thousands of students navigate the complex landscape of coding bootcamps.
