What is a deep net
Course: Deep Learning with TensorFlow / Chapter: Going Deeper: Introduction to Deep Neural Networks / Lesson 2
English transcript
We said we will talk extensively about layers.
Time to keep our promise. Here's arguably the most common pictorial representation of deep neural networks.
This is our first layer.
It is called the input layer.
That’s basically the data we have.
We take the inputs and get outputs as we did before.
The main rationale behind neural networks, however, is that we can now use these outputs as inputs for
another layer, and then another one, and another, until we decide to stop. The last layer we build is the
output layer.
That’s basically what we compare the targets to.
All right.
So the first layer is the input layer and the last layer is the output layer.
All the layers in between are called hidden layers. We call them hidden because we know the inputs and we
get the outputs, but we don't know what happens in between; these operations are hidden. Stacking layers
one after the other produces a deep network, or as we will call it, a deep net.
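The idea of stacking layers, where each layer's output becomes the next layer's input, can be sketched in a few lines of NumPy. The layer sizes, random initialization, and the ReLU non-linearity here are illustrative assumptions, not the course's exact setup:

```python
import numpy as np

rng = np.random.default_rng(0)

input_size, hidden_size, output_size = 4, 5, 2

# One weight matrix and bias vector per layer:
# two hidden layers followed by the output layer.
layers = [
    (rng.standard_normal((input_size, hidden_size)), np.zeros(hidden_size)),
    (rng.standard_normal((hidden_size, hidden_size)), np.zeros(hidden_size)),
    (rng.standard_normal((hidden_size, output_size)), np.zeros(output_size)),
]

def forward(x):
    # Each layer's output is fed as input to the next layer.
    for w, b in layers:
        x = np.maximum(0, x @ w + b)  # ReLU activation (an assumption)
    return x

x = rng.standard_normal((1, input_size))  # one sample entering the input layer
y = forward(x)
print(y.shape)  # shape of the output layer, which is compared to the targets
```

Deciding "when to stop" stacking simply corresponds to how many entries the `layers` list has.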
The building blocks of the hidden layer are called hidden units or nodes.
Here's a hidden unit. In mathematical terms, if h is the tensor related to the hidden layer, each hidden
unit is an element of that tensor.
The number of hidden units in a hidden layer is often referred to as the width of the layer. Usually,
but not always, we stack layers with the same width, so that the layer width is equal to the width of the
entire network. OK.
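Concretely, if h is the tensor for one hidden layer, its number of elements is the layer's width. A tiny sketch, with made-up values for the hidden units:

```python
import numpy as np

# The hidden layer as a tensor h; each hidden unit is one element of it.
h = np.array([0.2, -1.3, 0.7])  # illustrative values for three hidden units

width = h.shape[0]  # the width of the layer = number of hidden units
print(width)
```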
We saw how wide a deep network is.
Let’s examine how deep it can be.
Depth is an important ingredient as it refers to the number of hidden layers in a network.
When we create a machine learning algorithm, we choose its width and depth. We refer to these values
as hyperparameters. Hyperparameters should not be mistaken for parameters.
Recall that the parameters were the weights and the biases; the hyperparameters are the width, depth,
learning rate, and some of the variables we will see later.
The main difference between the two is that the value of the parameters will be derived through optimization
while hyper parameters are set by us before we start optimizing.
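The distinction can be made concrete in code: hyperparameters are plain values we pick up front, while the parameters are the weight and bias tensors whose values optimization will later change. The specific numbers below are assumptions for illustration:

```python
import numpy as np

# Hyperparameters: set by us before optimization starts (values are assumptions).
width = 8           # hidden units per hidden layer
depth = 3           # number of hidden layers
learning_rate = 0.01

rng = np.random.default_rng(42)

# Parameters: the weights and biases. They are merely initialized here;
# their final values would be derived through optimization.
sizes = [2] + [width] * depth + [1]  # input size 2, output size 1 (assumed)
params = [
    (rng.standard_normal((n_in, n_out)), np.zeros(n_out))
    for n_in, n_out in zip(sizes[:-1], sizes[1:])
]

n_layers_of_weights = len(params)
print(n_layers_of_weights)  # depth hidden layers plus the output layer
```

Changing `width` or `depth` changes the *shapes* of the parameter tensors; training only changes their *values*.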
All right.
This will do for now.
Thanks for watching.