Understanding deep nets in depth

Course: Deep Learning with TensorFlow / Chapter: Going deeper — Introduction to deep neural networks / Lesson 3


English transcript of the lesson

We already saw this picture, right?

However, I sense we didn't spend enough time on it.

It is a fundamental illustration of depth.

Whatever we do, we will come back to it, as it is simple yet so insightful. In this lesson we'll spend extra time examining it thoroughly.

The first layer you see is the input layer.

Each circle represents a separate input.

So in this graph we have eight inputs.

These inputs are the data we feed in to train the model. In the TensorFlow framework, this is the placeholder for inputs.

Remember the weather forecast example? Well, imagine it has eight inputs.

For instance: average temperature, highest temperature, lowest temperature, humidity, precipitation, atmospheric pressure, cloud cover, and visibility.

Now, remember, we combine these inputs linearly and then add a non-linearity. How can we do that?

Well linearity is easy.

We’ll just use the good old linear model.

Its inputs are the x's.

And to combine them linearly, we need weights. In this example, the weights are an 8-by-9 matrix.

One-by-eight times eight-by-nine gives us an object with the shape of one-by-nine.

Therefore, following this operation, we will get a vector of length 9, or a 1-by-9 matrix.

This is exactly the number of hidden units we have in the first hidden layer.
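As a quick sanity check of the shapes described above, here is a minimal NumPy sketch. NumPy stands in for the actual framework, and the names `x` and `W1` are illustrative, not from the lesson:

```python
import numpy as np

# One sample with 8 inputs, as a 1-by-8 row vector
x = np.random.rand(1, 8)

# Weights connecting 8 inputs to 9 hidden units: an 8-by-9 matrix
W1 = np.random.rand(8, 9)

# (1 x 8) times (8 x 9) gives a (1 x 9) object:
# one value per hidden unit in the first hidden layer
h_linear = x @ W1
assert h_linear.shape == (1, 9)
```

The inner dimensions (8 and 8) must match for the product to exist, and the outer dimensions (1 and 9) give the shape of the result.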

OK all these arrows will get us from the inputs to the first hidden layer.

Each arrow represents the mathematical transformation of a certain value.

So a certain weight is applied, then a non-linearity is added. Note that the non-linearity doesn't change the shape of the expression; it only changes its linearity.

OK, so arrows represent weights and non-linearities, right?

But how many weights are there? The weights matrix was eight by nine.

So there are 72 weights.

Well guess what.

We also have 72 arrows: nine arrows go out of each input unit and into a hidden unit.

Since we have eight input units, this gives us 72 arrows.

To be even more specific, each weight has two index numbers.

The first one indicates the input it is referring to, while the second one indicates the hidden unit it is referring to.

For example, weight 3,6 is applied to the third input and is involved in calculating the sixth hidden unit.

In the same way, weights 1,6, 2,6, and so on, up to 8,6, all participate in computing the sixth hidden unit.

They are linearly combined, and then a non-linearity is added, in order to produce the sixth hidden unit. In the same way, we get each of the other hidden units.
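The indexed computation just described can be sketched as follows. ReLU is used as the non-linearity purely as an assumption here, since the lesson has not yet named a specific activation:

```python
import numpy as np

x = np.random.rand(8)        # the 8 inputs
W = np.random.rand(8, 9)     # weight W[i, j]: input i -> hidden unit j

def relu(z):
    # One common non-linearity (an assumption; the lesson
    # introduces activations in a later video)
    return np.maximum(z, 0.0)

# Hidden unit 6 (1-based): weights 1,6 through 8,6 are linearly combined...
linear_part = sum(x[i] * W[i, 5] for i in range(8))
# ...and then the non-linearity is added
h6 = relu(linear_part)

# This is the same as one column of the full matrix product
assert np.isclose(h6, relu(x @ W)[5])
```

Each hidden unit is just one column of the weights matrix applied to all the inputs; doing all nine columns at once is exactly the matrix product.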

All right, then what?

Well, then we have the first hidden layer. Using the same logic, we can linearly combine the hidden units and apply a non-linearity, right?

Indeed. This time, though, there are nine input hidden units and nine output hidden units.

Therefore the weights will be contained in a 9 by 9 matrix and there will be 81 arrows.

Finally, we apply a non-linearity and reach the second hidden layer.

We can go on and on like this; we can add a hundred hidden layers if we want.

That’s a question of how deep we want our deep net to be.

And of course we will talk about that later in the course.

Finally, we'll have the last hidden layer. When we apply the operation once again, we will reach the output layer. The number of output units depends on the number of outputs we would like to have. In this picture, there are four.

They may be the temperature, humidity, precipitation, and pressure for the next day.

To reach this point, we will have a nine-by-four weights matrix, which corresponds to 36 arrows, or 36 weights, exactly what we expected.
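Putting the whole picture together, a minimal forward pass through the 8 → 9 → 9 → 4 architecture might look like the sketch below. ReLU is again an assumed activation, and biases are omitted since the lesson does not discuss them; the weight values are random placeholders, not trained:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    # Assumed non-linearity, applied after each linear combination
    return np.maximum(z, 0.0)

# Layer sizes from the picture: 8 inputs -> 9 -> 9 -> 4 outputs
W1 = rng.standard_normal((8, 9))   # 72 weights, i.e. 72 arrows
W2 = rng.standard_normal((9, 9))   # 81 weights, i.e. 81 arrows
W3 = rng.standard_normal((9, 4))   # 36 weights, i.e. 36 arrows

def forward(x):
    h1 = relu(x @ W1)   # first hidden layer, shape (1, 9)
    h2 = relu(h1 @ W2)  # second hidden layer, shape (1, 9)
    return h2 @ W3      # output layer: 4 values, e.g. tomorrow's forecast

y = forward(rng.standard_normal((1, 8)))
assert y.shape == (1, 4)
```

Training then means adjusting the entries of W1, W2, and W3 so that the outputs match the targets as closely as possible.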

All right. As before, our optimization goal is finding values for the weight matrices that would allow us to convert inputs into correct outputs as well as we can.

This time, though, we are not using a single linear model, but a complex infrastructure with a much higher probability of delivering a meaningful result.

OK, I'm sure you now see how powerful machine learning can be, right?

At least that’s how I felt when I was introduced to this picture.

We’re doing great and we shouldn’t stop here.

There’s plenty of work and learning ahead of us.

After explaining the logic behind deep nets, there is a tiny detail we still must discuss.

I bet you are wondering why we need those non-linearities after each linear combination to create the next layer.

Well, in the next lesson, we'll tackle this issue.
