Setting an early stopping mechanism

Course: Deep Learning with TensorFlow / Chapter: Business case / Lesson 7


Brief description

  • Reading time: 0 minutes
  • Level: very hard


English transcript of the lesson

Hi. In this lecture we’ll explore how to set up an early stopping mechanism with TensorFlow.

The fit method contains an argument called callbacks.

Callbacks are functions called at certain points during model training.

Fortunately there are many different readily available callbacks.

You can plot your training process in TensorBoard.

You can stream the results into a CSV file or a server.

Save the model after each epoch.

Adjust the learning rate in various ways, and these are just some of the options.

You can also define any custom callback you may want to use.
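As a quick sketch, here is how a few of these built-in callbacks are constructed, along with a custom one. The file paths, the decay factor, and the `PrintEpoch` class are illustrative placeholders, not values from the course:

```python
import tensorflow as tf

# A few of the readily available callbacks mentioned above
# (file paths here are illustrative placeholders):
tensorboard = tf.keras.callbacks.TensorBoard(log_dir="logs")    # plot training in TensorBoard
csv_logger = tf.keras.callbacks.CSVLogger("training_log.csv")   # stream results into a CSV file
checkpoint = tf.keras.callbacks.ModelCheckpoint("model.keras")  # save the model after each epoch
lr_schedule = tf.keras.callbacks.LearningRateScheduler(
    lambda epoch, lr: lr * 0.95  # adjust the learning rate each epoch
)

# A custom callback is defined by subclassing tf.keras.callbacks.Callback
class PrintEpoch(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        print(f"Finished epoch {epoch}, loss = {logs['loss']:.4f}")
```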

However, the one we’ll focus on is early stopping. And no wonder: early stopping is by definition a utility called at a certain point during training.

Each time the validation loss is calculated, it is compared to the validation loss one epoch ago.

If it starts increasing, the model is overfitting and we should stop training.

Since the early stopping mechanism is a hyperparameter in a way, let’s declare a new variable called early_stopping, which will be an instance of tf.keras.callbacks.EarlyStopping.

As you can guess, there is a readily available structure we can use.

So all we need to take care of are the particulars of this early stopping mechanism. By default, this object will monitor the validation loss and stop the training process the first time the validation loss starts increasing.
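As a minimal sketch, creating the object with its defaults looks like this:

```python
import tensorflow as tf

# With no arguments, EarlyStopping monitors the validation loss
# ('val_loss') and stops at the first epoch with no improvement
early_stopping = tf.keras.callbacks.EarlyStopping()

print(early_stopping.monitor)   # the quantity being monitored
print(early_stopping.patience)  # epochs to wait before stopping; 0 by default
```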

All right. Now that we’ve got our early stopping mechanism, it’s time to implement it in our training process. As we suggested, we should add a callbacks argument to the fit method, equal to a list of callbacks.

In our case, this list will have a single element: our early_stopping variable.

Let’s retrain the model.
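Since the course’s Audiobooks dataset isn’t reproduced here, the sketch below wires the callback into fit using synthetic stand-in data; the input shapes, layer sizes, and hyperparameters are illustrative, not the course’s exact values:

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-in for the audiobooks data (10 inputs, binary target)
rng = np.random.default_rng(42)
train_inputs = rng.normal(size=(1000, 10)).astype("float32")
train_targets = rng.integers(0, 2, size=(1000,))
val_inputs = rng.normal(size=(200, 10)).astype("float32")
val_targets = rng.integers(0, 2, size=(200,))

model = tf.keras.Sequential([
    tf.keras.layers.Dense(50, activation="relu"),
    tf.keras.layers.Dense(50, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# By default EarlyStopping monitors 'val_loss' with patience=0
early_stopping = tf.keras.callbacks.EarlyStopping()

history = model.fit(train_inputs, train_targets,
                    batch_size=100,
                    epochs=100,
                    callbacks=[early_stopping],  # a list of callbacks; here just one
                    validation_data=(val_inputs, val_targets),
                    verbose=0)

# Training ends the first time the validation loss fails to improve
print("Stopped after", len(history.history["loss"]), "epochs")
```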

We can see that the new training lasts for fewer than 20 epochs, and the final accuracy of the model is around 90 percent.

Obviously, the first time we trained our model, we had overfit. The result was around 2 percent higher, and we can attribute this solely to overtraining.

Now if we examine the validation loss, we’ll notice that the first time it increased was during the last epoch.

Moreover, it increased only slightly. Sometimes, if we notice that the validation loss has increased by an insignificant amount, we may prefer to let one or two validation increases slide.

To allow for this tolerance, we can adjust the early stopping object.

There is an argument called patience, which by default is set to zero. There, we can specify the number of epochs with no improvement after which the training will be stopped.

It’s a bit too strict to have no tolerance for a random increase in the validation loss. Therefore, let’s set the patience to 2.

This way we’ll be completely sure when the model has started to overfit. All right.
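The adjustment itself is a one-line change to the object we declared earlier:

```python
import tensorflow as tf

# Tolerate up to 2 consecutive epochs with no improvement in the
# validation loss before stopping the training
early_stopping = tf.keras.callbacks.EarlyStopping(patience=2)
```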

With this adjustment already implemented, we can rerun the code. Depending on your problem and dataset, the difference may not be crucial. However, this is yet another debugging tool of sorts that you have at your disposal.

This time the accuracy is between 90 and 91 percent: definitely worse than our overfitted model, but slightly better than the one with no patience.

What about its interpretation?

Well, with your extensive knowledge, you can easily interpret the result, so there’s no need to spend too much time on that now.

There are several important outcomes I’d like to point out and they are all conceptual rather than quantitative.

First the final validation accuracy of the model is around 90 percent.

The priors were 50 percent which means our machine learning algorithm definitely learned a lot.

It managed to classify around 90 percent of the customers correctly.

Sounds great, right?

In other words, if we’re given 10 customers and their audiobook activity, we’ll be able to correctly identify the future customer behavior of nine of them.

But how does this help us in practice?

We can use this information for what we intended to.

We can focus our marketing efforts only on those customers who are likely to convert again.

Take a second to reflect on this amazing discovery.

In the beginning we started with a bunch of raw data which did not make a whole lot of sense to anyone

who was not in this line of business.

Moreover many variables were binary and there were lots of missing values.

Even the orders of magnitude had nothing in common.

If we were to manually explore the data and give an educated guess on whether a customer would convert again, it is likely that the result would be as good as blind guessing. My personal bet is that a human would do even worse than that.

With some descriptive statistics and regression models, we’d get better results, but I assure you it wouldn’t be very impressive, and you would definitely need a lot more than the 30 minutes we spent in order to create this model.

It is extremely hard to predict human behavior and most of the time it’s even counter-intuitive.

However the machine learning algorithm we created here is a new tool in your arsenal that has given

you an incredible edge.

Moreover using the algorithm is a skill you can easily apply in any business out there.

So basically what you did is leverage the power of artificial intelligence to reach a business insight.

Congratulations and great work.
