# Outline the model

Lesson 6



### English transcript

So far we have loaded and preprocessed our raw data.

The next step is to outline the model.

So consider pausing the video here if you feel like you need to revisit the introduction to TensorFlow we showed a few lessons ago.

This way it will be easier for you to see the parallels between the two exercises, and you'll know what to expect.

OK, let's start by drawing a picture that follows the logic we explained in the previous videos.

There are 784 inputs.

So that’s our input layer.

We have 10 output nodes, one for each digit.

We will work with two hidden layers consisting of fifty nodes each.

As you may recall, the width and the depth of the net are hyperparameters.

I don't know the optimal width and depth for this problem, but I surely know that what I've chosen right now is suboptimal.

Anyhow, in your next homework you'll have the chance to fine-tune the hyperparameters of our model and obtain an improved result.

All right, let's declare three variables for the width of the inputs, outputs, and hidden layers: the input size is 784, the output size is 10 as we have 10 digits, and the hidden layer size is 50.

The underlying assumption is that all hidden layers are of the same size.

Alternatively, you can create hidden layers of different widths and see if they work better for your problem.
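The three size variables described above can be written out as follows (a minimal sketch; the variable names follow the narration):

```python
# Widths described in the lesson: 784 inputs (28 x 28 x 1 pixels per image),
# 10 outputs (one per digit), and 50 nodes in each of the two hidden layers.
input_size = 784
output_size = 10
hidden_layer_size = 50
```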

Next we must define the actual model.

Once again we will store it in a variable called model: model is equal to tf.keras.Sequential, open brackets and square brackets.

The first layer is the input layer.

Our data is such that each observation is 28 by 28 by 1, or a tensor of rank 3.

As we already discussed, since we don't know CNNs yet, we need to flatten the images into a vector.

Note that this is a common operation in deep learning, so there is a dedicated method called Flatten.

Flatten is part of the layers module and takes as an argument the shape of the object we want to flatten; it transforms it, or more specifically flattens it, into a vector.

Therefore we write tf.keras.layers.Flatten and indicate the input shape of 28 by 28 by 1.

Thus we have prepared our data for a feedforward neural network of the same kind that we've discussed so far in the course.
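Assuming TensorFlow 2.x is installed, the flattening step on its own looks like this; the dummy batch of zeros is just an illustration, not part of the lesson's data:

```python
import tensorflow as tf

# Flatten turns each 28x28x1 image tensor into a flat 784-element vector,
# leaving the batch dimension untouched.
flatten = tf.keras.layers.Flatten()
batch = tf.zeros((32, 28, 28, 1))  # a dummy batch of 32 blank "images"
flattened = flatten(batch)         # shape becomes (32, 784)
```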

The next step is building the neural network in a very similar way to our TensorFlow intro model.

We employ tf.keras.layers.Dense to build each consecutive layer.

Let me remind you that tf.keras.layers.Dense basically finds the dot product of the inputs and the weights and adds the bias.

Now let’s build on that.

It can also apply an activation function to this expression.

This is precisely what we've discussed so far theoretically; it's time to see how it is implemented.

tf.keras.layers.Dense takes as its first argument the output size of the mathematical operation.

In this case, we are going from the inputs to the first hidden layer.

Therefore, the output of the first mathematical operation will have the shape of the first hidden layer.

As a second argument, we can include an activation function.

I'll go for ReLU because I know it works very well for this problem.

In practice each neural network has a different optimal combination of activation functions.

Does it matter for MNIST?

Well you’ll find that out on your own for homework.

OK we finished the line off by placing a comma.
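A single hidden layer of the kind just described can be sketched in isolation like this (the zero-filled input is only a placeholder for one flattened image):

```python
import tensorflow as tf

# Dense computes activation(dot(inputs, weights) + bias).
# 50 matches the hidden_layer_size from the lesson; 'relu' is the
# rectified linear unit chosen in the narration.
hidden = tf.keras.layers.Dense(50, activation='relu')
out = hidden(tf.zeros((1, 784)))  # one flattened dummy image -> shape (1, 50)
```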

So far we've got our flattened inputs and the first hidden layer; we can create the second hidden layer in the same way.

tf.keras.layers.Dense, and in the brackets we specify the size of the second hidden layer and the activation function.

Once again, they'll be the variable hidden_layer_size and the rectified linear unit, or ReLU. That's it.

As you can see, outlining the model is child's play.

You can stack as many layers as you like, one after the other, using this structure, but that's also part of the homework.

The final layer is the output layer.

It is no different in terms of syntax.

As you've probably guessed, we use Dense to create it.

This time, though, we specify the output size rather than the hidden layer size, as we can see from the diagram.

What about the activation?

Well, from the theoretical lessons you know that when we are creating a classifier, the activation function of the output layer must transform the values into probabilities, right?

Therefore we must opt for the softmax.
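Putting the pieces together, the model outlined in this lesson can be sketched as follows (a minimal sketch, assuming TensorFlow 2.x; newer Keras versions may warn that Input objects are preferred over the input_shape argument used in the narration):

```python
import tensorflow as tf

output_size = 10        # one output node per digit
hidden_layer_size = 50  # width of each of the two hidden layers

model = tf.keras.Sequential([
    # 28x28x1 image -> flat vector of 784 values
    tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
    # two hidden layers of 50 nodes each, with ReLU activation
    tf.keras.layers.Dense(hidden_layer_size, activation='relu'),
    tf.keras.layers.Dense(hidden_layer_size, activation='relu'),
    # output layer: softmax turns the 10 values into probabilities
    tf.keras.layers.Dense(output_size, activation='softmax'),
])
```

Calling the model on a batch of images returns one probability distribution over the 10 digits per image.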

All right.

That's all; our model has been built.

In our next lesson, we will proceed to the next step: the optimization algorithm.

Thanks for watching.
