Learning and interpreting the result




English transcript of the lesson

Welcome back.

Throughout the entire course, we have claimed that TensorFlow code is extremely reusable.

We will now prove this is in fact true.

Let me copy the MNIST model outline.

I’m sure you are already very familiar with this code.

We’re going to need a couple of adjustments though.

First, the input size of our model must be 10, as there are 10 predictors.

Second, the output size of our model must be 2, as our targets are zeros or ones.

What about the hidden layer size?

We can leave it as it is because we aren’t sure what the optimal value is.

Finally, in the MNIST code we used Flatten to flatten each image into a vector.

This time, though, we have already preprocessed our data appropriately, so we can delete that line altogether.

The rest remains unchanged.

We've got two hidden layers, each activated by a ReLU activation function.

We know that our model is a classifier.

Therefore, our output layer should be activated with softmax, and that's all.

Simple as that.
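For reference, a minimal sketch of what this adjusted outline could look like, assuming a hidden layer size of 50 carried over from the MNIST example (the hidden layer width is an illustrative assumption, not a value prescribed by the lecture):

```python
import tensorflow as tf

input_size = 10          # 10 predictors
output_size = 2          # targets are 0 or 1
hidden_layer_size = 50   # assumed value, carried over from the MNIST example

model = tf.keras.Sequential([
    # No Flatten layer this time: the preprocessed inputs are already 1-D vectors.
    tf.keras.layers.Dense(hidden_layer_size, activation='relu'),  # 1st hidden layer
    tf.keras.layers.Dense(hidden_layer_size, activation='relu'),  # 2nd hidden layer
    tf.keras.layers.Dense(output_size, activation='softmax'),     # output layer
])
```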

Next we choose the optimizer and the loss function like before.

Let's copy-paste that from the MNIST example.

The chosen optimizer is Adam, while the loss is sparse categorical cross-entropy.

We use this loss to ensure that our integer targets are one-hot encoded appropriately when calculating the loss.

Once again, we are happy with obtaining the accuracy for each epoch.
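As a rough sketch, the compile step copied from the MNIST example would look something like this:

```python
# Adam optimizer, sparse categorical cross-entropy (so the integer targets
# are handled as if one-hot encoded when the loss is computed), and
# accuracy reported during training.
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```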

Great.

Let’s finish the code on our own.

No more copy-pasting.

We haven't set two of our hyperparameters yet: the batch size and the number of epochs.

Speaking of batch size, we already said that in this example we won't take advantage of iterable objects that contain the data. Instead, we will employ simple arrays, while the batching itself will be indicated when we fit the model in a minute or two.

All right, let's set both the batch size and the maximum number of epochs to 100, so batch_size equals 100 and the number of epochs, or max_epochs, equals 100.
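In code, using the variable names from the lecture, that is simply:

```python
# Both hyperparameters set to 100, as stated in the lecture.
batch_size = 100
max_epochs = 100
```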

Next we simply fit the model.

So, model.fit, brackets.

Let’s start with the train inputs and the train targets.

We could feed a 2-tuple object containing both of them, as we did with MNIST, or we could feed them separately.

To show you both approaches, we already extracted the inputs and the targets into separate variables.

OK let’s continue inside the fit method.

We place the inputs first and then the targets.

So our first two arguments are train inputs and train targets.

Next we’ve got the batch size.

If you are dealing with arrays, as we are now, indicating the batch size here will automatically batch the data during the training process.

So let the argument batch_size be equal to our hyperparameter batch_size.

What follows is the maximum number of epochs.

The argument of interest is called epochs, and I'll set it to the variable max_epochs. Regarding the validation data, there are two arrays of interest: the validation inputs and the validation targets. Finally, let's set verbose to 2. Great, let's run the code and see what happens.
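Putting the fit call together, a sketch might look like the one below, building on the earlier snippets and assuming the preprocessed data has already been loaded into NumPy arrays named train_inputs, train_targets, validation_inputs, and validation_targets (illustrative names; the actual loading code depends on how the preprocessed files were saved):

```python
# model, batch_size, and max_epochs come from the earlier snippets.
model.fit(train_inputs,                 # training inputs
          train_targets,                # training targets
          batch_size=batch_size,        # batching is indicated here for plain arrays
          epochs=max_epochs,            # maximum number of epochs
          validation_data=(validation_inputs, validation_targets),
          verbose=2)                    # one summary line per epoch
```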

The result that we get is outstanding: after 100 epochs of training, we have reached a validation accuracy of around 91 to 92 percent.

Now, before we get too excited about that, let's think this through.

Why did our model train for all 100 epochs? Isn't there a danger of overfitting after training for so long?

Well yes precisely.

If we check the training process over time, we'll notice that while the training loss was consistently decreasing, our validation loss was sometimes increasing.

So it's pretty obvious we have overfitted. When we trained the MNIST model, we didn't really set an early stopping procedure. Here, once again, we missed this step. For MNIST, this was not really crucial: if you remember, the dataset was so well preprocessed that it would barely make a difference. This time, though, it does.

So our next lecture will be all about setting an early stopping mechanism.

Thanks for watching.
