Select the loss and the optimizer

Course: Deep Learning with TensorFlow / Chapter: The MNIST example / Lesson 7




English transcript of the lesson

Welcome back.

We’ve taken care of the data and the model.

Now let’s proceed with the next essential steps, similar to our TensorFlow intro lecture.

We must specify the optimizer and the loss through the compile method we call on the model object.

So let’s write model dot compile.

We start by specifying the optimizer. We know that one of the best choices we’ve got is the adaptive moment estimation, or Adam in short.

As you may recall, TensorFlow allows us to use a string to define the optimizer. To select the Adam optimizer, we simply write 'adam'. By the way, these strings are not case sensitive, so you can capitalize the first letter or all letters if you wish.


What about the loss function?

Well, we’d like to employ a loss that is used for classifiers. Cross entropy would normally be our first choice.

However, there are different types of cross entropy in TensorFlow, too.

There are three built-in variations of a cross entropy loss. They are binary cross entropy, categorical cross entropy, and sparse categorical cross entropy.

Logically, binary cross entropy refers to the case where we’ve got binary encoding, so we won’t be choosing this one.

Categorical cross entropy and sparse categorical cross entropy are equivalent, with the difference that sparse categorical cross entropy applies one-hot encoding to the data. Is our data one-hot encoded? Well, that was not a preprocessing step we went through. However, the outputs and the targets should have matching forms: our model and optimizer expect the output shape to match the target shape in a one-hot encoded format. This means we should opt for the sparse categorical cross entropy.
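To see why the distinction matters, here is a small NumPy sketch (not from the lecture; the probabilities and targets are made up for illustration) that computes both losses by hand: sparse categorical cross entropy takes the integer targets directly, while plain categorical cross entropy needs explicitly one-hot-encoded targets, and the two yield the same value.

```python
import numpy as np

# Hypothetical predicted probabilities for 3 samples over 10 classes
# (each row sums to 1, like a softmax output would).
rng = np.random.default_rng(0)
logits = rng.normal(size=(3, 10))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# Integer targets, as in MNIST without a one-hot preprocessing step.
targets = np.array([3, 7, 0])

# Sparse categorical cross entropy: index the probabilities directly.
sparse_ce = -np.log(probs[np.arange(len(targets)), targets]).mean()

# Categorical cross entropy: requires one-hot targets.
one_hot = np.eye(10)[targets]
categorical_ce = -(one_hot * np.log(probs)).sum(axis=1).mean()

# The two losses are equivalent; sparse just skips the explicit one-hot step.
assert np.isclose(sparse_ce, categorical_ce)
```

So choosing the sparse variant simply lets us keep the integer labels we already have.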

All right.

Finally we can add a third argument to compile.

We could include metrics that we wish to calculate throughout the training and testing processes.

Typically that’s the accuracy.

So let’s add it here.
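Putting the three arguments together, the compile call described in this lesson looks like the following sketch. The tiny `Dense` stand-in model is an assumption added here just so the snippet runs on its own; in the course, `model` is the network assembled in the previous lesson.

```python
import tensorflow as tf

# Stand-in model so the snippet is self-contained; in the course,
# `model` is the deep network built in the previous lesson.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation='softmax')
])

# Optimizer and loss given as strings (case-insensitive), plus the
# accuracy metric to track during training and testing.
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```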


We are in good shape to train our model.

Thanks for watching.
