Model layout

Brief description

  • Reading time: 0 minutes
  • Level: very hard


Video file

English transcript of the lesson

OK great.

We’ll start by loading the data from the npz file we saved in the last lecture.

Let training_data equal np.load of 'TF_intro.npz'. As you can guess, we could have skipped this step, but it’s good to get used to loading your data from npz files, as that’s how you’ll usually be provided with data.

OK, next we’ll create two variables that measure the size of our inputs and outputs. The input size is 2, as there are two input variables, the xs and the zs we saw earlier, and the output size is 1, as there is only one output, y. All right.

These two lines of code assign the values 2 and 1 to the variables input_size and output_size.

That’s programming you are already familiar with.

So nothing new so far.

Now pay attention, because here comes the first difference.

Unlike other packages, where we have built-in models, when we are employing TensorFlow we must actually build the model.

So let’s build our model and store it in a variable called model: model will be equal to tf.keras.Sequential.

Hold on.

Let’s stop for a second. TF stands for TensorFlow.

That’s clear.

What about Keras?

Well, as we have previously discussed, TF2 is based on Keras.

So that’s the module needed.

Finally, Sequential is the function which indicates that we are laying down the model. It takes as arguments the different layers we’d like to include in our algorithm.

While we haven’t spoken about layers just yet the algorithm we’re building has a simple structure.

It takes inputs applies a single linear transformation and provides outputs.

These linear combinations together with the outputs constitute the so-called output layer.

All right.

You know from the minimal example with NumPy that the outputs are equal to the dot product of the inputs and weights, plus the bias.


In fact there is another useful method here.

It’s called Dense. From tf.keras.layers, the Dense method takes the provided inputs, calculates the dot product of the inputs and weights, and adds the bias.

It is precisely what we wanted to achieve.
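What the Dense layer computes can be sketched directly in NumPy; the numbers below are made up purely for illustration.

```python
import numpy as np

# outputs = dot(inputs, weights) + bias — exactly what a Dense layer does.
inputs = np.array([[1.0, 2.0],
                   [3.0, 4.0]])        # 2 observations, 2 input variables
weights = np.array([[0.5], [-1.0]])    # shape (input_size, output_size) = (2, 1)
bias = np.array([0.1])

outputs = np.dot(inputs, weights) + bias
print(outputs)
```

For the first observation this gives 1·0.5 + 2·(−1) + 0.1 = −1.4, and similarly −2.4 for the second; a Dense layer performs the same computation, with weights and bias that it learns.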

Therefore in brackets we must simply specify the output size.

We’ve already stored it in a variable, so we can parameterize our code by placing that variable as an argument.

That alone is completely enough for our model specification.
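Assuming output_size is 1 as above, the model specification described so far can be sketched as:

```python
import tensorflow as tf

output_size = 1

# Sequential lays down the model; Dense is the single linear output layer,
# computing dot(inputs, weights) + bias.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(output_size)
])
```

The layer’s weights and bias are created automatically once the model first sees the shape of the inputs, so nothing else needs to be specified here.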


Now, according to our theoretical framework, we need data, a model, an objective function, and an optimization algorithm.

We’ve taken care of the data and the model, and are left with the latter two. The method which allows us to specify them is called compile.

So, model.compile, brackets. In the brackets we include several different arguments.

The optimizer, or the optimization algorithm, we will use is abbreviated as SGD. SGD stands for stochastic gradient descent, and is a generalization of the gradient descent concept we have already learned.

We will dive into the differences later in the course.

Don’t worry. To add it as an argument, we write optimizer equals, quotation marks, and the string name of the optimizer we want to use. Now, when using high-level packages that require a string, you’d want to check what you can actually include as a string.

If we go online and check tf.keras.optimizers, we’ll see a list of the names of the different optimizers. This part of the documentation is where you can check the exact name of the optimizer we want to use.


For this example, that would be 'sgd'. Of course, we’ll explore most of the other optimizers later in the course.


OK, the second argument we’ll include is the loss function. We want to make this example as close as possible to our NumPy minimal example, so we have to use the L2-norm loss, scaled by the number of observations.

In such cases good theoretical preparation comes in handy.

The L2-norm loss is also known as the least sum of squares.

Moreover, scaling by the number of observations is equivalent to finding an average, or a mean. Looking through the possible losses, we discover mean squared error, and mean squared error is precisely the L2-norm loss scaled by the number of observations.

With that in mind, let’s include the argument loss equal to mean squared error.
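The compile step can be sketched as follows; the string names 'sgd' and 'mean_squared_error' are the documented identifiers for the optimizer and loss discussed above.

```python
import tensorflow as tf

# A single-layer linear model, as specified earlier in the lecture.
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])

# Configure the learning process: stochastic gradient descent as the
# optimizer, and mean squared error (the L2-norm loss scaled by the
# number of observations) as the objective function.
model.compile(optimizer='sgd', loss='mean_squared_error')
```

Passing strings lets Keras construct the optimizer and loss objects with their default settings; later in the course the optimizer can instead be instantiated explicitly to control, for instance, the learning rate.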

Yep it looks like we’re done.

We’ve loaded the data, outlined the model, and configured the learning process by selecting an objective function and an optimization algorithm.

Well, almost. What we’ve got left is to indicate to the model which data to fit.

Similar to many other libraries, TensorFlow 2 employs a fit method with two mandatory arguments: the inputs and the targets. Let’s write model.fit; in the brackets we must specify the inputs, which are contained in the inputs tensor from the variable training_data, and the targets, which are contained in the targets tensor from training_data.

OK this same method is also the place where we set the number of iterations.

Each iteration over the full data set in machine learning is called an epoch.

So, from now on, we’ll use this term when describing iterations and the number of iterations.

Now, let’s set the number of epochs to 100. Finally, I’ll set verbose to 0 and discuss it once we actually run the code. All right, in the next lecture we’ll look into the result.
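The steps above can be put together into one self-contained sketch. The file name TF_intro.npz, the tensor names 'inputs' and 'targets', and the synthetic data-generating lines are assumptions standing in for the previous lecture's output.

```python
import numpy as np
import tensorflow as tf

# Stand-in training data (in the course this comes from the earlier lecture):
# two input variables and a noisy linear target.
inputs = np.random.uniform(-10, 10, (1000, 2))
xs, zs = inputs[:, 0:1], inputs[:, 1:2]
targets = 2 * xs - 3 * zs + 5 + np.random.uniform(-1, 1, (1000, 1))
np.savez('TF_intro', inputs=inputs, targets=targets)

# Load the data from the npz file.
training_data = np.load('TF_intro.npz')

input_size = 2    # two input variables: the xs and the zs
output_size = 1   # one output: y

# Build the model: a single Dense (linear) output layer.
model = tf.keras.Sequential([tf.keras.layers.Dense(output_size)])

# Objective function and optimization algorithm.
model.compile(optimizer='sgd', loss='mean_squared_error')

# Fit the model; one epoch is one full pass over the data set,
# and verbose=0 suppresses the per-epoch training log.
model.fit(training_data['inputs'], training_data['targets'],
          epochs=100, verbose=0)

# Inspect the learned parameters of the output layer.
weights, bias = model.layers[0].get_weights()
print(weights.shape, bias.shape)   # (2, 1) (1,)
```

After training, the weights and bias should be close to the coefficients used to generate the targets, which is exactly what the next lecture examines.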

Thanks for watching.
