Training and validation

Course: Deep Learning with TensorFlow / Chapter: Overfitting / Lesson 3


Brief overview

  • Reading time: 0 minutes
  • Level: very hard


English transcript of the lesson

I believe the previous lessons convinced you that overfitting is the real enemy when it comes to machine learning.

We also said we will teach you how to deal with it.

That’s precisely what we’ll do in this lesson. To prevent overfitting, one must be able to identify it first.

Let’s start with that.

Usually we’ll be able to spot overfitting by dividing our available data into three subsets: training, validation, and test.

The first one is the training dataset. As its name suggests, it helps us train the model to its final form.

As you should know, that’s the place where we perform everything we’ve seen until now. Nothing is new here; so far we treated all our data as training data, but in the exercises we intentionally labelled the Python variables training data instead of data.
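
For illustration, here is a minimal sketch of such a three-way split in Python. The 80/10/10 proportions, the random data, and the variable names are assumptions made for the example, not something prescribed by the lesson.

    import numpy as np

    # Hypothetical dataset: 1,000 samples with 10 features each (made-up shapes).
    inputs = np.random.rand(1000, 10)
    targets = np.random.rand(1000, 1)

    # Shuffle before splitting so all three subsets follow the same distribution.
    indices = np.arange(inputs.shape[0])
    np.random.shuffle(indices)
    inputs, targets = inputs[indices], targets[indices]

    # Illustrative 80/10/10 split into training, validation, and test subsets.
    train_count = int(0.8 * inputs.shape[0])
    validation_count = int(0.1 * inputs.shape[0])

    train_inputs = inputs[:train_count]
    train_targets = targets[:train_count]
    validation_inputs = inputs[train_count:train_count + validation_count]
    validation_targets = targets[train_count:train_count + validation_count]
    test_inputs = inputs[train_count + validation_count:]
    test_targets = targets[train_count + validation_count:]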

OK, let’s check out the other two subsets.

The validation data set is the one that will help us detect and prevent overfitting.

Let’s see how that works.

All the training is done on the training set. In other words, we update the weights using the training set only.

Every once in a while, we stop training for a bit. At this point the model is somewhat trained. What we do next is take the model and apply it to the validation dataset.

This time we just run it without updating the weights, so we only propagate forward, not backward. In other words, we just calculate its loss function.

On average, the loss function calculated for the validation set should be about the same as the one for the training set. This is logical, as the training and validation sets were extracted from the same initial dataset and contain the same perceived dependencies.

Normally, we would perform this operation many times in the process of creating a good machine learning algorithm.
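
As a sketch of how this periodic validation pass looks in TensorFlow: passing validation_data to model.fit makes Keras run a forward pass over the validation set after every epoch and report its loss, without updating the weights. The small architecture below and the variable names (reused from the earlier split sketch) are illustrative assumptions.

    import tensorflow as tf

    # A small illustrative regression model; the architecture is an assumption.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(50, activation='relu'),
        tf.keras.layers.Dense(1)
    ])
    model.compile(optimizer='adam', loss='mean_squared_error')

    # Weights are updated on the training set only; after each epoch Keras
    # evaluates the model on the validation set (forward pass only) and logs
    # the result as val_loss next to the training loss.
    history = model.fit(
        train_inputs, train_targets,
        epochs=20,
        validation_data=(validation_inputs, validation_targets),
        verbose=2
    )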

The two loss functions we calculate are referred to as the training loss and the validation loss.

Because the model is trained on the training data using gradient descent, each subsequent training loss will be lower than, or equal to, the previous one. That’s how gradient descent works by definition, so we are sure the training loss is being minimized.

That’s where the validation loss comes into play. At some point, the validation loss could start increasing.

That’s a red flag. We are overfitting: we are getting better at predicting the training set, but we are moving away from the overall logic of the data.

At this point, we should stop training the model.
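
The lesson describes this stopping rule in words rather than naming a tool; one common way to automate it in Keras is the EarlyStopping callback, sketched here under the same assumptions as the previous snippet.

    # Stop training once the validation loss stops improving.
    early_stopping = tf.keras.callbacks.EarlyStopping(
        monitor='val_loss',          # watch the validation loss...
        patience=2,                  # ...allow a couple of non-improving epochs
        restore_best_weights=True    # roll back to the best weights seen
    )

    history = model.fit(
        train_inputs, train_targets,
        epochs=100,
        validation_data=(validation_inputs, validation_targets),
        callbacks=[early_stopping],
        verbose=2
    )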

Let’s illustrate this with the same example we used in the last lesson.

We start from an underfitting position. By increasing the complexity of the model, we reach a very good model: the training cost is going down and the validation cost is moving accordingly.

At some point, though, we start overfitting. As you can see, the training loss is still decreasing while the validation loss is increasing.

That’s when we should stop.
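
To reproduce that picture from your own run, you can plot the two curves stored in the history object returned by model.fit; this plotting snippet is an illustration, not part of the lesson.

    import matplotlib.pyplot as plt

    # The point where the validation curve turns upward while the training
    # curve keeps falling is where overfitting begins.
    plt.plot(history.history['loss'], label='training loss')
    plt.plot(history.history['val_loss'], label='validation loss')
    plt.xlabel('epoch')
    plt.ylabel('loss')
    plt.legend()
    plt.show()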

All right.

It is extremely important that the model is not trained on validation samples. That would defeat the whole purpose of the process described above. The training set and the validation set should be separate, without overlapping each other.

Cool.

See you in our next lesson and thanks for watching.
