Interpreting the result and extracting the weights and bias


Lesson transcript (English)

Hi again.

Let’s pick up where we left off.

We’ve got all the code needed to train our first algorithm with TensorFlow 2.

So it is time to run it.

Sadly the result is a bit underwhelming.

We get nothing more than an output signifying that the model has been trained and stored in an object, with no information about the training itself.

The reason is that we set verbose to 0, which stands for ‘silent’: no output about the training is displayed.

Therefore, if we set verbose to 1, we should get a progress bar.

Let’s try this out.

Unfortunately, with the current version of TensorFlow and its integration with Windows and Jupyter, we get the whole output in text form.

However, it’s not such a big deal, as we can still clearly see all the information we need.

For those of you coding in other environments, this may be a more pleasant experience, like this one, for instance.

Because of this lousy output, the cleanest form of information about the training appears when verbose is set to 2.

This indicates we will get one line per epoch, which will allow us to follow the development of the loss function over the course of training.

The first piece is a timer tracking the time it took, in seconds, to complete each epoch.

For all epochs in this simple example, it took zero seconds per epoch.

The second output available on each line is the current value of the loss function.

As we scroll down through the epochs, we confirm that the loss is in fact decreasing, so our algorithm has worked as intended.
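As a reminder of where this verbose argument lives, here is a minimal sketch of the training call, assuming the model and the training data from the previous lesson (the name training_data is illustrative):

    # Minimal sketch; `model` and `training_data` are assumed to come from
    # the previous lesson. verbose=0: silent, verbose=1: progress bar,
    # verbose=2: one line per epoch.
    model.fit(training_data['inputs'], training_data['targets'],
              epochs=100, verbose=2)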

All right.

As we discussed already, we generated the targets with the function 2x - 3z + 5 + noise, in order to be able to assess how our model did.

You should be aware that in a real-life situation you’d never know the exact relationship, so it wouldn’t be possible to confirm how well your model has fared.
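For reference, here is a minimal sketch of how such data could be generated with NumPy, assuming the setup from the earlier data-generation lesson (the variable names and exact ranges are illustrative):

    import numpy as np

    observations = 1000

    # Two input variables drawn uniformly at random.
    xs = np.random.uniform(-10, 10, (observations, 1))
    zs = np.random.uniform(-10, 10, (observations, 1))

    # A small noise term so the relationship is not perfectly deterministic.
    noise = np.random.uniform(-1, 1, (observations, 1))

    # The underlying relationship the model should recover: 2x - 3z + 5.
    targets = 2 * xs - 3 * zs + 5 + noise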

Anyway, if we check the weights and the bias, they should be 2, -3, and 5.

There is a convenient built-in method called get_weights() that can be applied to each layer for this purpose.

Therefore, we write model.layers[0].get_weights().

Model is the model that we created.

Next we must specify the layer we are interested in.

In this case, that’s the only layer, so we take the one at position 0.

Finally, we apply the method get_weights().

The output is a list of two arrays: one for the weights and one for the biases, in this case a single bias.

As anticipated, the weights are approximately 2 and -3, while the bias is approximately 5.

This is precisely the information which confirms that our algorithm has indeed learned the underlying

relationship.
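In code, extracting the learned parameters could look like this (a sketch, assuming the single-layer model defined earlier):

    # get_weights() returns a list of NumPy arrays; for a Dense layer,
    # that list holds the kernel (weights) and the bias.
    weights, bias = model.layers[0].get_weights()
    print(weights)  # expected to be close to [[ 2.], [-3.]]
    print(bias)     # expected to be close to [5.]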

Great.

What if we wanted to predict values using our model?

To predict values with our model, we use the method predict_on_batch(); the batch here is the data that we provide it with.

So let’s write model.predict_on_batch(), fed with the training inputs.

The result comprises an array with the corresponding output for each of the inputs.

In fact, these are the values that are compared to the targets to evaluate the loss function.

To be precise, these are the outputs based on the trained model, or in our case, the outputs after one hundred epochs of training.

Since the outputs are compared to the targets at each epoch, it may be interesting to compare them manually.

To achieve that, we can display the training targets and round all values to one digit after the dot, so they are easily readable.
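A sketch of that comparison, again assuming training_data is the dictionary holding the generated inputs and targets, and NumPy is imported as np:

    # Outputs of the trained model for all training inputs.
    outputs = model.predict_on_batch(training_data['inputs'])

    # Round both outputs and targets to one digit after the dot.
    print(np.asarray(outputs).round(1))
    print(training_data['targets'].round(1))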

What we see is that the outputs and the targets are very close to each other but not exactly the same.

All right.

Finally, we can use the same technique as in our previous minimal example: we can plot the outputs against the targets.

Since we expect them to be very close to each other, the line should be as close to 45 degrees as possible, and that’s precisely what we get.
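One way to draw that plot with matplotlib (a sketch under the same naming assumptions; np.squeeze drops the extra dimension so the arrays plot cleanly):

    import matplotlib.pyplot as plt
    import numpy as np

    # Outputs vs. targets should fall along a roughly 45-degree line.
    plt.plot(np.squeeze(model.predict_on_batch(training_data['inputs'])),
             np.squeeze(training_data['targets']))
    plt.xlabel('outputs')
    plt.ylabel('targets')
    plt.show()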

So we have successfully built our first machine learning algorithm with TensorFlow 2.

We can’t call that deep learning yet, but we got somewhat acquainted with the package.

In the next lecture, we’ll explore how to bring it even closer to our NumPy model.

Thanks for watching.
