Learning


Lesson transcript

We’ve reached the most exciting part of the machine learning process: training.

This is where we fit the model we have built and see if it actually works, so to speak. First, let’s create a variable storing the number of epochs that we wish to train for. I’ll call it NUM_EPOCHS and arbitrarily set it to five.
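A minimal sketch of that line, assuming we follow the naming just described:

    # Number of epochs to train for; five is an arbitrary starting point.
    NUM_EPOCHS = 5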

Next, we can fit the model, similar to our TensorFlow intro. We will use the fit method, so model.fit(). First, we specify the data; in this case, train_data. Second, we set the number of epochs, so epochs=NUM_EPOCHS.

Note that we have parameterized it in a neat way, so we can clearly inspect and amend the number of epochs. Whenever we have hyperparameters, such as the buffer size, batch size, input size, output size, and so on, we prefer to create dedicated variables that can be easily spotted when we fine-tune and/or debug our code.

OK, this alone would be enough to train the model. However, we also need to validate, right? It’s a good thing we’ve already prepared the validation data. All we have to do now is include it as an argument in that same method, equal to the validation inputs and validation targets we created earlier. Finally, I’ll set verbose to 2 to make sure we receive only the most important information for each epoch.
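Putting the whole call together, a sketch might look like this (assuming train_data, validation_inputs, and validation_targets were prepared in the earlier preprocessing lessons):

    model.fit(
        train_data,                  # the training data, already batched
        epochs=NUM_EPOCHS,           # the dedicated hyperparameter variable
        validation_data=(validation_inputs, validation_targets),
        verbose=2,                   # print only one summary line per epoch
    )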

Great.

Let’s briefly explain what we expect to happen behind the curtains. At the beginning of each epoch, the training loss will be set to zero. The algorithm will iterate over a preset number of batches, all extracted from the training set. Essentially, the whole training set will be utilized, but in batches. Therefore, the weights and biases will be updated as many times as there are batches. At the end of each epoch, we’ll get a value for the loss function, indicating how the training is going.

Moreover, we’ll also see a training accuracy, thanks to the last argument we added. Then, at the end of the epoch, the algorithm will forward propagate the whole validation dataset in a single batch through the optimized model and calculate the validation accuracy. When we reach the maximum number of epochs, the training will be over.
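To make that loop concrete, here is an illustrative sketch written by hand with TensorFlow 2’s GradientTape. This is not Keras’s literal internals; the loss and optimizer choices are placeholders, and model, train_data, validation_inputs, and validation_targets are assumed from the earlier lessons:

    import tensorflow as tf

    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()  # placeholder choice
    optimizer = tf.keras.optimizers.Adam()                     # placeholder choice

    for epoch in range(NUM_EPOCHS):
        epoch_loss = 0.0  # the training loss is reset at each epoch
        for batch_inputs, batch_targets in train_data:
            with tf.GradientTape() as tape:
                predictions = model(batch_inputs, training=True)
                loss = loss_fn(batch_targets, predictions)
            # one weights-and-biases update per batch
            grads = tape.gradient(loss, model.trainable_variables)
            optimizer.apply_gradients(zip(grads, model.trainable_variables))
            epoch_loss += float(loss)
        # at the end of the epoch, forward propagate the whole validation
        # set in a single batch through the optimized model
        val_predictions = model(validation_inputs, training=False)
        val_loss = float(loss_fn(validation_targets, val_predictions))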

Great.

Let’s run the code. What we see are several lines of output. First, we have information about the number of the epoch. Next, we’ve got the number of batches. It says 540 out of 540 because, if we had a progress bar, it would fill out gradually. The third piece of information is the time it took for the epoch to conclude. On my machine, that’s around five to six seconds per epoch. So far, so good.

Next, we can see the training loss. It doesn’t make sense to investigate it separately; it should be compared to the training loss across epochs. In this case, it is mostly decreasing. Note that it didn’t change too much, because even after the first epoch, we’ve already had 540 different weight and bias updates, one for each batch.

What follows is the accuracy. The accuracy shows in what percent of the cases our outputs were equal to the targets. Logically, it follows the trend of the loss; after all, they both represent how well the outputs match the targets.

OK, finally, we’ve got the loss and the accuracy for the validation dataset. They are our check. We usually keep an eye on the validation loss to determine whether the model is overfitting. The validation accuracy, on the other hand, is the true accuracy of the model for the epoch. This is because the training accuracy is the average accuracy across batches, while the validation accuracy is that of the whole validation set. Great. To assess the overall accuracy of our model, we look at the validation accuracy for the last epoch. For us, it’s around 97 percent.
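Incidentally, if you capture the return value of fit, you can read these numbers programmatically. In TensorFlow 2, the returned History object stores per-epoch metrics under keys such as 'val_loss' and 'val_accuracy' (a sketch, reusing the same call as above):

    history = model.fit(
        train_data,
        epochs=NUM_EPOCHS,
        validation_data=(validation_inputs, validation_targets),
        verbose=2,
    )
    # validation accuracy of the last epoch, i.e. the model's "true" accuracy
    print(history.history['val_accuracy'][-1])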

This is a remarkable result already, but can we do better? Let’s try fiddling a bit with the model. We can change many of the hyperparameters, but I’ll start with the hidden layer size. Instead of 50 nodes in each hidden layer, let’s go for 100.
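As a sketch of where that change lives, assuming the model was built in the earlier lesson along these lines (the variable hidden_layer_size, the input handling, and the layer count are illustrative, not a verbatim copy of the course code):

    import tensorflow as tf

    hidden_layer_size = 100  # was 50 in the first run

    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28, 1)),            # images to vectors
        tf.keras.layers.Dense(hidden_layer_size, activation='relu'),
        tf.keras.layers.Dense(hidden_layer_size, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax'),             # one node per digit
    ])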

Oh wow.

We’ve drastically increased the accuracy of our model.

Amazing.

Take a moment to appreciate this result.

Imagine somebody came to you a month ago and told you: here are 70,000 photos of handwritten, squiggly, blotched, crooked digits; write an algorithm that recognizes which digit has been written. What would you think about that? Probably, you wouldn’t know where to start.

Now, with just a few lines of code, we’ve written an algorithm that gets 97 to 98 out of 100 digits right. This is a remarkable accuracy, considering the simple model we used.

I’d like to conclude with a take from tensorflow.org on the MNIST training. They show a model with an accuracy of 92 percent and ask: is that good? Their answer is: not really. In fact, it’s pretty bad. Our model was just a bit more complicated than theirs, and we achieved an accuracy of 97.5 percent.

The question is: can you do better? Check the next lecture for instructions on how to do that for homework.

Thanks for watching.
