Testing the model

Course: Deep Learning with TensorFlow / Chapter: The MNIST example / Lesson 9

Lesson transcript

Hi! I hope you have played around with the model a bit.

Hopefully you managed to create a better model than the one we showed.

Either way I am sure you had a lot of fun.

I guess you were repeatedly hitting above the ninety-seven point five percent mark.

Maybe some of you even reached 98 percent. But does that mean that the model was 98 percent accurate?

Since I'm asking, it's obviously a tricky question.

No, it doesn't.

That’s the validation accuracy.

We must still test the model on the test dataset, because the final accuracy of the model comes from forward propagating the test dataset, not the validation one.

The reason is that we may have overfit. But didn't we already deal with overfitting?

That's a fair point, as some of you may have missed the difference between the validation and test datasets.

So let’s clarify that.

We train on the training data and then validate on the validation data.

That's how we make sure our parameters, the weights and the biases, don't overfit.
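
The lines below are a minimal sketch of that train/validate step in Keras. The names model, train_data, validation_inputs, validation_targets, and NUM_EPOCHS are assumed from the earlier MNIST lessons; treat them as placeholders for whatever your own notebook uses.

    # Train on the training data and validate on the validation data after each epoch.
    # Variable names (train_data, validation_inputs, validation_targets, NUM_EPOCHS)
    # are assumed from the earlier lessons in this chapter.
    model.fit(train_data,
              epochs=NUM_EPOCHS,
              validation_data=(validation_inputs, validation_targets),
              verbose=2)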

Once we train our first model, though, we fiddle with the hyperparameters.

Normally, we won't change only the width of the hidden layers.

We can adjust the depth, the learning rate, the batch size, the activation functions for each layer, and so on.
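
To make those knobs concrete, here is one sketch of how the hyperparameters might be exposed as variables when building the model. The specific values are placeholders of my own, not recommendations from the lecture.

    import tensorflow as tf

    # Hypothetical hyperparameter choices -- tweak them and re-run to compare validation accuracy.
    hidden_layer_size = 200       # width of each hidden layer
    depth = 3                     # number of hidden layers
    activation = 'relu'           # activation function for the hidden layers
    learning_rate = 0.001         # step size passed to the optimizer
    # (The batch size is usually set when batching the dataset, e.g. train_data = train_data.batch(100).)

    layers = [tf.keras.layers.Flatten(input_shape=(28, 28, 1))]
    for _ in range(depth):
        layers.append(tf.keras.layers.Dense(hidden_layer_size, activation=activation))
    layers.append(tf.keras.layers.Dense(10, activation='softmax'))

    model = tf.keras.Sequential(layers)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=learning_rate),
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])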

You’ve probably done it all.

Each time we make a change, we run the model once more and check whether the validation accuracy improved.

After 10 to 20 different combinations, we may reach a model with outstanding validation accuracy.

In essence, we are trying to find the best hyperparameters, but what we find are not the best hyperparameters in general.

These are the hyperparameters that fit our validation dataset best.

Basically, by fine-tuning them, we are overfitting the validation dataset.

Let's elaborate a bit.

During the training stage, we can overfit the parameters, or the weights and biases.

The validation dataset is our reality check that prevents us from overfitting the parameters.

After fiddling with the hyperparameters, we can overfit the validation dataset, as we are using the validation accuracy as a benchmark for how good the model is.

The test dataset, then, is our reality check that prevents us from overfitting the hyperparameters.

It is a dataset the model has truly never seen.

Well, let's test the model then. We can assess the test accuracy using the evaluate method.

So, if we write model.evaluate(test_data), we will be forward propagating the test data through the net with our current model structure.

There will be two outputs: the loss and the accuracy, the same ones we had in the training stage.

To make it clearer, I'll store them in test_loss and test_accuracy.
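
As a sketch, the call looks like this, assuming test_data is the batched test set prepared in the earlier preprocessing lesson:

    # Forward propagate the test data once and collect the loss and accuracy.
    # test_data is assumed to be the already batched test set.
    test_loss, test_accuracy = model.evaluate(test_data)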

Let’s run the code.

Nothing comes out, as we still haven't displayed them.

Let’s print the results using some nice formatting.
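
One possible way to format the two numbers (the exact format string here is my own choice, not necessarily the lecture's):

    # Display the two metrics with readable formatting.
    print('Test loss: {0:.2f}. Test accuracy: {1:.2f}%'.format(test_loss, test_accuracy * 100.))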

Here’s the result.

Our model has a final test accuracy of around ninety-seven point five percent.

This is also the final stage of the machine learning process.

After we test the model, conceptually we are no longer allowed to change it.

If you start changing the model after this point, the test data will no longer be a dataset the model has never seen.

You'd already have feedback that it reaches around ninety-seven point five percent accuracy with this particular configuration.

The main point of the test dataset is to simulate model deployment.

If we get 50 percent or 60 percent testing accuracy, we will know for sure that our model has overfit, and it would fail miserably in real life.

However, getting a value very close to the validation accuracy shows that we have not overfit.

Finally, the test accuracy is the accuracy we expect to observe if we deploy the model in the real world.

Great.

This was our last lecture on the MNIST data.

In the next section, we will take a real dataset, preprocess it, and solve a business case.

Can’t wait to see you there.

Oh, and thanks for watching.
