Why do we need non-linearities?


We said non-linearities are needed so we can represent more complicated relationships.

While that is true, it isn’t the full picture. An important consequence of including non-linearities is the ability to stack layers. Stacking layers is the process of placing one layer after the other in a meaningful way.

Remember that; it’s fundamental.

The point we will make is that we cannot stack layers when we have only linear relationships.

Let’s prove it.

Imagine we have a single hidden layer and there are no non-linearities.

So our picture looks this way.

There are eight input nodes, nine hidden nodes in the hidden layer, and four output nodes.

Therefore we have an eight-by-nine weights matrix for the linear relationship between the input layer and the hidden layer. Let’s call this matrix W1.

The hidden units H are given by the linear model: H = X · W1.

Let’s ignore the biases for now.

So our hidden units are summarized in the matrix H with a shape of one by nine.

Now let’s get from the hidden layer to the output layer, once again according to the linear model: Y = H · W2. We have W2 here, as these weights are different.

We already know the H matrix is equal to X · W1, right?

Let’s replace H in this equation: Y = X · W1 · W2. But W1 and W2 can be multiplied, right?

What we get is a combined matrix W* with dimensions 8 by 4. Well then, our deep net can be simplified into a linear model which looks this way: Y = X · W*. Knowing that, we realize the hidden layer is completely useless in this case.
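
To see the collapse concretely, here is a minimal NumPy sketch of the same setup; the random data and variable names are just for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=(1, 8))    # one input sample with eight features
w1 = rng.normal(size=(8, 9))   # weights between input and hidden layer
w2 = rng.normal(size=(9, 4))   # weights between hidden and output layer

h = x @ w1                     # hidden units, shape (1, 9)
y_deep = h @ w2                # output of the "deep" net, shape (1, 4)

w_star = w1 @ w2               # the combined 8-by-4 matrix W*
y_linear = x @ w_star          # output of the simple linear model

print(np.allclose(y_deep, y_linear))  # True: identical outputs
```

Swapping in the combined matrix changes nothing, which is exactly what the derivation predicted.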

We can just train this simple linear model and we would get the same result. In mathematics this seems like an obvious fact, but in machine learning it is not so clear from the beginning.

Two consecutive linear transformations are equivalent to a single one.

Even if we added 100 layers, the problem would be simplified to a single transformation.

That is the reason we need non-linearities.
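
For contrast, here is the same sketch with a non-linearity inserted between the two layers. The lesson has not named a specific activation, so the choice of ReLU here is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=(10, 8))   # ten input samples this time
w1 = rng.normal(size=(8, 9))
w2 = rng.normal(size=(9, 4))

h = np.maximum(x @ w1, 0.0)    # ReLU: zero out negative hidden units
y_nonlinear = h @ w2           # this can no longer be collapsed

y_collapsed = x @ (w1 @ w2)    # the single-matrix shortcut from before

print(np.allclose(y_nonlinear, y_collapsed))  # False: the shortcut fails
```

Because ReLU is not a linear map, no single matrix W* can reproduce its effect, which is exactly why stacking becomes meaningful.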

Without them, stacking layers one after the other is meaningless, and without stacking layers we will have no depth.

What’s more, with no depth, each and every problem would reduce to the simple linear example we did earlier.

And many practitioners would tell you that’s borderline machine learning.

All right, let’s summarize in one sentence: to have deep nets that find complex relationships through arbitrary functions, we need non-linearities.

Point taken.

Thanks for watching.
