Underfitting and overfitting

Course: Deep Learning with TensorFlow / Chapter: Overfitting / Lesson 1




Lesson transcript (English)

One of the most commonly asked questions in data science interviews is about overfitting.

A recruiter will probably bring up the topic and ask you: what is overfitting, and how do we deal with it?

Fortunately, in this lesson we'll address this issue, so you won't be surprised when it comes up.

There are two concepts that are interrelated: underfitting and overfitting. They go together.

Understanding one helps us understand the other, and vice versa.

OK.

Broadly speaking, overfitting means our model has focused on the particular training set so much that it has missed the point.

Underfitting, on the other hand, means the model has not captured the underlying logic of the data.

It doesn't know what to do and therefore provides an answer that is far from correct.

Let's explain this with graphs, as it is much more intuitive (and cooler).

First we will look at a regression and then we’ll consider a classification problem.

Here we can see several data points following the blue function, with some minor noise. A good algorithm would result in a model that looks like this.

It is not perfect, but it is very close to the actual relationship.

OK, we can certainly say a linear model would be an underfitting model.

It provides an answer but does not capture the underlying logic of the data.

It doesn’t have strong predictive power.

It's kind of lame. Underfitted models are clumsy: they have high costs, in terms of a high loss function, and their accuracy is low.

You quickly realize that either there are no relationships to be found or you need a more complex model.

All right, let's check an overfitting model. Here it is. Now that you see this picture, we can say that overfitting refers to models that are so good at modeling the training data that they fit, or come very near, each observation.

The problem is that the random noise is captured inside an overfitting model.

Let me elaborate on this a little further with an example.

Imagine you are trying to predict the euro/dollar exchange rate based on 50 common indicators.

You train your model and get a low cost and high accuracy. In fact, you believe that with 99.99% accuracy you can predict the exchange rate.

Confident in your machine learning skills, you start trading with real money.

Unfortunately, most orders you place fail miserably, and in the end you lose all your money because you trusted the amazing model so much.

What happened with your model? It probably overfit the data: it was trained to explain the training data so well that it missed the point.

Instead of finding the dependency between the euro and the dollar, you modeled the noise. The noise in this case consists of the random decisions of the investors participating in the market at that time.

So, shouldn't the computer be smarter than that, you may ask? Well, we explained the training data well, so our outputs were close to the targets. The loss function was low, and the learning process worked like a charm, in mathematical terms.

However, once we go out of the training set and meet a real-life situation, we see our model is actually quite bad. The first rule of programming states that the computer is never wrong.

It is us who made a mistake.

We must keep issues such as overfitting in mind and take care of them with the appropriate remedies.

As a whole overfitting can be quite tricky.

After seeing this beautiful graph, you probably believe you can spot an overfitting problem. But remember that in the FX example there were 50 indicators, which means we would need a 51-dimensional graph, and I don't know about you, but my senses work in three dimensions only.

Don’t worry we will explore how to deal with overfitting in the next few lessons.

But first we will look at a classification example.

Stay tuned.
