Minimal example - part 3
Course: Deep Learning with TensorFlow / Chapter: Minimal example - your first machine learning algorithm / Lesson 3
English transcript
For those of you who skipped the previous lesson, I'll make a quick recap.
We have our input data, which is a 1000-by-2 matrix. In linear algebraic terms, this refers to a two-variable problem with 1000 observations.
We also have our targets.
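To make the recap concrete, here is a minimal sketch of how such a data set might be generated, following the setup from the previous lesson. The specific coefficients, intercept, and noise term in the target rule are assumptions for illustration.

```python
import numpy as np

# A sketch of the recapped setup: two input variables, 1000 observations.
observations = 1000
xs = np.random.uniform(low=-10, high=10, size=(observations, 1))
zs = np.random.uniform(low=-10, high=10, size=(observations, 1))

# Stack the two columns into the 1000-by-2 inputs matrix.
inputs = np.column_stack((xs, zs))

# Targets follow an assumed linear rule plus small noise, so a strong
# linear relationship exists for the algorithm to discover.
noise = np.random.uniform(low=-1, high=1, size=(observations, 1))
targets = 2 * xs - 3 * zs + 5 + noise

print(inputs.shape)   # (1000, 2)
print(targets.shape)  # (1000, 1)
```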
I will quickly plot the data so we can see there is a strong linear relationship.
Here’s the 3D plot.
You don’t need to do that.
It’s just that simple linear problems are quite visual.
So we can afford to plot them.
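For reference, a plot along these lines can be produced with matplotlib's 3D toolkit. This is just a sketch, assuming the xs, zs, and targets arrays from the snippet above.

```python
import matplotlib.pyplot as plt

# 3D scatter of the two inputs against the targets; with data generated
# as above, the points lie close to a plane (a strong linear relationship).
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.scatter(xs, zs, targets)
ax.set_xlabel('xs')
ax.set_ylabel('zs')
ax.set_zlabel('Targets')
plt.show()
```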
If you download the notebook file you will see the same code with comments.
Feel free to inspect it in more detail.
All right, let's re-examine the linear model: y = xw + b. Our algorithm will try to find values for w and b such that the outputs y are closest to the targets.
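In NumPy terms, the model is a single line. The snippet below assumes the inputs matrix from the first sketch and uses zero placeholders for w and b purely to show the shapes; the proper random initialization is described next.

```python
# Placeholder parameters, only to illustrate the shapes involved;
# the real initialization comes below.
weights = np.zeros((2, 1))   # w: one weight per input variable
biases = np.zeros((1, 1))    # b: a single bias for the single output

# y = xw + b, computed for all 1000 observations at once.
outputs = np.dot(inputs, weights) + biases
print(outputs.shape)  # (1000, 1)
```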
Remember, when we performed gradient descent, we started from some arbitrary numbers and then proceeded.
Well we must do the same thing now.
However this is tricky.
Conventionally, we don't want to start from an arbitrary number we choose ourselves; rather, we randomly select small initial weights.
We will talk about that in more detail later.
For now, let's declare a variable called init_range and set it to 0.1. That will be the radius of the range we'll use to initialize the weights and the biases: our initial weights and biases will be picked randomly from the interval -0.1 to 0.1.
We will generate them as we have so far, using the random uniform method.
The size of the weights matrix is 2-by-1, as we have two input variables and a single output, so there are two weights, one for each input variable.
Let's declare the bias; logically, the appropriate shape is 1-by-1, so the bias is a scalar. In machine learning, there are as many biases as there are outputs; each bias refers to one output.
If you recall the example we saw earlier about apartment prices and apartment rent, it involved two biases, as there were two outputs.
I'll print the weights and the biases so you can see what they look like: they are small and close to zero. These are the weights, and this is the bias. All right.
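Putting the initialization steps together, here is a sketch consistent with the description above; the variable names follow the transcript, and np.random.uniform is assumed to be the "random uniform method" referred to.

```python
# Radius of the initialization interval: parameters start in [-0.1, 0.1].
init_range = 0.1

# 2-by-1 weights matrix: one weight per input variable, one output.
weights = np.random.uniform(low=-init_range, high=init_range, size=(2, 1))

# A single bias (a 1-by-1 scalar), one per output.
biases = np.random.uniform(low=-init_range, high=init_range, size=(1, 1))

print(weights)  # small values close to zero
print(biases)
```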
Finally, we must choose a learning rate, which we denoted with eta (η) earlier. I'll simply select the value 0.02.
I found this learning rate useful for this demonstration. For homework, you will have to play around with it so you can see how different learning rates affect the speed of optimization.
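In code this is just one assignment; the alternative values suggested in the comments are assumed examples for the homework, not part of the lesson.

```python
# The learning rate, denoted eta in the gradient-descent lessons.
learning_rate = 0.02

# Homework idea: re-run the optimization with other values, e.g.
# learning_rate = 0.0001  (likely much slower convergence)
# learning_rate = 0.1     (may overshoot and fail to converge)
```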
So we are all set.
We have inputs, targets, and arbitrary numbers for the weights and biases.
What is left is to vary the weights and biases so that our outputs are closest to the targets. As we know by now, the problem boils down to minimizing the loss function with respect to the weights and the biases. And because this is a regression, we'll use one half the L2-norm loss function.
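As a sketch, the loss can be computed like this, assuming the outputs and targets arrays from the snippets above. Dividing by the number of observations is an extra averaging convention; it rescales the loss but does not change where the minimum is.

```python
# One half the L2-norm loss, averaged over the observations.
deltas = outputs - targets
loss = np.sum(deltas ** 2) / 2 / observations
print(loss)
```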
Okay great.
Next let’s make our model learn.