Customizing your model
Brief description
- Study time: 0 minutes
- Level: very hard
English text of the lesson
Welcome back.
In our NumPy neural network, we had several decisions to make.
First, we had to select the best way to initialize the weights. Back then, we chose the starting points of the weights and biases to be random numbers between -0.1 and 0.1. Here, we let the default TensorFlow settings do their magic.
In fact, if we want to make this example as close to the original as possible, we can set a random uniform initializer where we define the layer. Instead of having a single argument, output_size, we can also add a kernel_initializer and a bias_initializer. Kernel here is the broader term for a weight. OK, so let kernel_initializer be equal to tf.random_uniform_initializer, from -0.1 to 0.1.
Similarly, bias_initializer would be equal to the same expression. Great. Naturally, you can specify other ways in which you want your weights and biases to be initialized, but more on that in later sections. What else did we explicitly specify in the NumPy example?
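In code, the layer definition might look roughly like this; output_size = 1 is an assumption standing in for the setup of the earlier minimal example:

```python
import tensorflow as tf

# Assumed from the earlier minimal example: a single linear output unit
output_size = 1

model = tf.keras.Sequential([
    tf.keras.layers.Dense(
        output_size,
        # Start the weights as uniform random numbers in [-0.1, 0.1]
        kernel_initializer=tf.random_uniform_initializer(minval=-0.1, maxval=0.1),
        # Same range for the starting biases
        bias_initializer=tf.random_uniform_initializer(minval=-0.1, maxval=0.1)
    )
])
```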
Well, the learning rate. The learning rate is an integral part of the optimizer. Here, we took the default stochastic gradient descent, or SGD. In fact, we can create a variable called custom_optimizer, equal to tf.keras.optimizers.SGD, and specify several arguments. For now, the only one we know is the learning rate, so let's set it to 0.02, as we did in the NumPy example.
Next, we should replace the string 'sgd' in model.compile with our new custom_optimizer. The new result would be practically the same, with the small difference that we get to set the learning rate ourselves here.
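A sketch of what that could look like; the mean squared error loss is an assumption carried over from the minimal example:

```python
# An SGD optimizer with an explicitly chosen learning rate of 0.02
custom_optimizer = tf.keras.optimizers.SGD(learning_rate=0.02)

# Pass the optimizer object instead of the 'sgd' string;
# the loss assumes the mean squared error of the minimal example
model.compile(optimizer=custom_optimizer, loss='mean_squared_error')
```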
It's a good idea to note that you can always refer to the TensorFlow documentation to figure out how to customize your model.
OK so far so good.
Now, the only thing we have left is the loss. Since matters there are much more complicated, in this course we will use the built-in losses without customizing them. In the future, you may get hooked on neural networks and want to try out new loss functions. Rest assured, it can be done. However, it's rarely worth the trouble.
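For the curious, here is a minimal sketch of how it can be done; my_custom_loss is a hypothetical name, not something from this lesson, and it merely re-implements the built-in mean squared error:

```python
# Hypothetical custom loss: a hand-written mean squared error,
# shown only to illustrate that custom losses are possible
def my_custom_loss(y_true, y_pred):
    return tf.reduce_mean(tf.square(y_true - y_pred))

# Any callable taking (y_true, y_pred) can be passed as the loss
model.compile(optimizer=custom_optimizer, loss=my_custom_loss)
```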
All right.
Let’s train our model with these tweaks.
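The training call itself stays the same; training_inputs, training_targets, and the epoch count below are placeholders standing in for the data and settings of the minimal example:

```python
# Train as before; only the initialization and the optimizer changed.
# training_inputs, training_targets, and epochs=100 are assumed values,
# not ones confirmed by this lesson.
model.fit(training_inputs, training_targets, epochs=100, verbose=2)
```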
Unsurprisingly, the results are no different. Our weights and biases are also practically what we wanted them to be. The outputs are as close to the targets as they used to be, which is evident from their arrays as well as the plot.
OK. As we progress through the course, we will look into each building block in more detail: from layers through optimizers, learning rate schedules, initialization, and much more. Can't wait to see you there.
Thanks for watching.