Backpropagation - visual representation





English transcript of the lesson

OK, great. Let's look at the schematic illustration of backpropagation shown here. Our net is quite simple.

It has a single hidden layer.

Each node is labeled.

So we have inputs x1 and x2, hidden layer units h1, h2, and h3, output layer units y1 and y2, and finally, the targets t1 and t2.

The weights are w11, w12, w13, w21, w22, and w23 for the first part of the net.

For the second part, we named them u11, u12, u21, u22, u31, and u32.

So we can differentiate between the two types of weights.

That’s very important.
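To make the picture concrete, here is a minimal NumPy sketch of the 2-3-2 net described above. The variable names and the input and target values are illustrative, not taken from the course code, and the hidden layer is kept linear for now.

```python
import numpy as np

np.random.seed(0)                   # reproducible toy numbers

x = np.array([0.5, -1.0])           # inputs x1, x2 (made-up values)
t = np.array([1.0, 0.0])            # targets t1, t2 (made-up values)

W = np.random.randn(2, 3) * 0.1     # w11..w23: input -> hidden weights
U = np.random.randn(3, 2) * 0.1     # u11..u32: hidden -> output weights

h = x @ W                           # hidden units h1, h2, h3 (linear for now)
y = h @ U                           # outputs y1, y2
```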

We know the error associated with y1 and y2, as it depends on the known targets.

So, let's call the two errors e1 and e2.

Based on them, we can adjust the weights labeled with u.

Each u weight contributes to a single error.

For example, u11 contributes to e1.

Then we find its derivative and update the coefficient.

Nothing new here.
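As a rough illustration of that update, continuing the sketch above and assuming a squared-error loss (the lesson does not fix a particular loss), the gradient of each u weight involves only the error of the single output it feeds.

```python
# Squared-error assumption: each output's error is its difference
# from the target.
e = y - t                           # e1, e2

# u_jk feeds only output k, so its gradient is h_j * e_k;
# no other error is involved.
grad_U = np.outer(h, e)             # shape (3, 2), one entry per u weight
```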

Now, let's examine w11. It helped us predict h1, but then we needed h1 to calculate y1 and y2.

Thus, it played a role in determining both errors, e1 and e2.

So, while u11 contributes to a single error, w11 contributes to both errors.

Therefore, its adjustment rule must be different.

The solution to this problem is to take the errors and backpropagate them through the net using the weights.

Knowing the u weights, we can measure the contribution of each hidden unit to the respective errors.

Then, once we have found the contribution of each hidden unit to the respective errors, we can update the w weights.

So, essentially, through backpropagation the algorithm identifies which weights lead to which errors. Then it adjusts the weights that have a bigger contribution to the errors by more than the weights with a smaller contribution.
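Continuing the same sketch, pushing the errors back through the u weights gives each hidden unit's share of the errors, which is exactly what the w gradients need. Both layers are then updated with a plain gradient-descent step (the learning rate is arbitrary).

```python
# Backpropagate: each hidden unit's contribution to the errors is the
# error vector pushed back through the u weights it feeds.
delta_h = e @ U.T                   # contributions of h1, h2, h3, shape (3,)

# w_ij influences both outputs only through h_j, so its gradient is
# x_i * delta_h_j.
grad_W = np.outer(x, delta_h)       # shape (2, 3), one entry per w weight

eta = 0.1                           # arbitrary learning rate
U -= eta * grad_U                   # update the u weights
W -= eta * grad_W                   # update the w weights
```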

A big problem arises when we must also consider the activation functions.

They introduce additional complexity to this process.

Linear contributions are easy, but non-linear ones are tougher.
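For instance, if the hidden units used a sigmoid activation (a hypothetical choice for this sketch, not something the lesson prescribes), its derivative would multiply into the backpropagated term before it reaches the w weights.

```python
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

a = x @ W                           # hidden-layer pre-activations
h = sigmoid(a)                      # non-linear hidden units
y = h @ U                           # outputs y1, y2
e = y - t                           # output errors, as before

# The sigmoid's derivative h * (1 - h) appears in the chain rule.
delta_h = (e @ U.T) * h * (1.0 - h)
grad_W = np.outer(x, delta_h)       # w gradients with the non-linearity included
```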

Imagine backpropagating in our introductory net. Once you understand it, it seems very simple.

While pictorially straightforward, mathematically it is rough, to say the least.

That is why backpropagation is one of the biggest challenges for the speed of an algorithm.

In the next two lessons, we'll explain the mathematics behind backpropagation.

If you would like to acquire a deep understanding of deep nets, we encourage you to proceed and watch them.

Otherwise, feel free to skip to the next topic, where we will examine overfitting.
