Why is Linear Algebra Useful



Lesson Transcript

We’ve seen a lot of matrix operations so far.

But the big question for most of you remains: why is linear algebra actually useful?

There are many applications of linear algebra, in data science in particular.

There are several of high importance, which we will explore and use later on.

Some are easy to grasp; others, not just yet.

In this lesson we will explore three of them: vectorizing code (also known as array programming), image recognition, and dimensionality reduction.

OK, let’s start with the simplest and probably the most commonly used one: vectorized code.

We can certainly claim that the price of a house depends on its size.

Suppose you know that the exact relationship for some neighborhood is given by the equation:

price = 10,190 + 223 × size

Moreover, you know the sizes of five houses: 693, 656, 1060, 487, and 1275 square feet.

What you want to do is plug each size into the equation and find the price of each house.

Right? Well, for the first one we get 10,190 + 223 × 693 = 164,729.

Then we can find the next one, and so on, until we find all the prices.

Now, if we had 100 houses, doing that by hand would be quite tedious, wouldn’t it?

One way to deal with that problem is by creating a loop.

You can iterate over the sizes, multiplying each of them by 223 and adding 10,190.
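As a quick sketch, that loop might look like this in Python, using the five sizes from the example (the variable names are made up for illustration):

```python
# Loop approach: compute each price one house at a time.
sizes = [693, 656, 1060, 487, 1275]  # square feet

prices = []
for size in sizes:
    prices.append(10190 + 223 * size)  # price = 10,190 + 223 * size

print(prices)  # [164729, 156478, 246570, 118791, 294515]
```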

However, we are smarter than that, aren’t we?

We know some linear algebra already.

Let’s explore these two objects: a 5-by-2 matrix and a vector of length 2.

The matrix contains a column of ones and another with the sizes of the houses.

The vector contains 10,190 and 223, the numbers from the equation.

If we multiply them, we will get a vector of length 5.

The first element will be equal to 1 × 10,190 + 693 × 223; the second, to 1 × 10,190 + 656 × 223; and so on.

By inspecting these expressions, we quickly realize that the resulting vector contains all the manual calculations we made earlier to find the prices.

In machine learning, and linear regression in particular, this is exactly how algorithms work.
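Here is a minimal NumPy sketch of that same computation as a single matrix-vector product (the names A and w are illustrative, not from the lesson):

```python
import numpy as np

# The 5x2 matrix: a column of ones and a column of house sizes.
A = np.array([[1,  693],
              [1,  656],
              [1, 1060],
              [1,  487],
              [1, 1275]])

# The length-2 vector with the coefficients from the equation.
w = np.array([10190, 223])

# One multiplication produces all five prices at once.
prices = A @ w
print(prices)  # [164729 156478 246570 118791 294515]
```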

We’ve got an inputs matrix, a matrix of weights (or a coefficient matrix), and an output matrix.

Without diving too deep into the mechanics of it here, let’s note something.

If we have 10,000 inputs, the inputs matrix would be 10,000 by 2, right?

The weights matrix would still be 2 by 1.

When we multiply them, the resulting output matrix would be 10,000 by 1.

This shows us that, no matter the number of inputs, we will get just as many outputs.

Moreover, the equation doesn’t change, as it only contains the two coefficients: 10,190 and 223.

This concept will turn out to be quite helpful in your machine learning studies later on.

All right.

So, whenever we use linear algebra to compute many values simultaneously, we call this array programming, or vectorizing code.

It is important to stress that array programming is much, much faster.

There are libraries such as NumPy that are optimized for performing this kind of operation, which greatly increases the computational efficiency of our code.
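To get a feel for the speed difference, here is a rough comparison sketch; the data is randomly generated and the exact timings will vary from machine to machine:

```python
import time
import numpy as np

sizes = np.random.randint(400, 2000, size=1_000_000)  # a million made-up sizes

# Plain Python loop over the elements.
start = time.perf_counter()
loop_prices = [10190 + 223 * s for s in sizes]
print(f"loop:       {time.perf_counter() - start:.3f} s")

# Vectorized NumPy: one array expression, no explicit Python loop.
start = time.perf_counter()
vec_prices = 10190 + 223 * sizes
print(f"vectorized: {time.perf_counter() - start:.3f} s")
```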

OK, what about image recognition?

In the last few years, deep learning, and deep neural networks in particular, conquered image recognition; on the forefront are convolutional neural networks, or CNNs.

In short, what is the basic idea? You can take a photo, feed it to the algorithm, and classify it.

Famous examples are the MNIST dataset, where the task is to classify handwritten digits; CIFAR-10, where the task is to classify animals and vehicles; and CIFAR-100, where you have 100 different classes of images.

The problem is that we cannot just take a photo and give it to the computer.

We must design a way to turn that photo into numbers in order to communicate the image to the computer.

Here’s where linear algebra comes in.

Each photo has some dimensions right.

Say this photo is 400 by 400 pixels each pixel and a photo is basically a colored square.

Given enough pixels and a big enough zoom out causes our brain to perceive this as an image rather than

a collection of squares.

Let’s dig into that.

Here’s a simple grayscale photo.

The grayscale contains 256 shades of gray, where 0 is totally white and 255 is totally black, or vice versa.

We can actually express this photo as a matrix.

If the photo is 400 by 400 pixels, then that’s a 400-by-400 matrix.

Each element of that matrix is a number from 0 to 255; it shows the intensity of the color gray in that pixel.

That’s how the computer sees a photo. But grayscale is boring, isn’t it?
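As a small sketch, such a photo can be held in exactly this kind of matrix; here, random values stand in for real pixel intensities:

```python
import numpy as np

# A hypothetical 400x400 grayscale photo: one 8-bit intensity per pixel.
photo = np.random.randint(0, 256, size=(400, 400), dtype=np.uint8)

print(photo.shape)  # (400, 400)
print(photo[0, 0])  # intensity of the top-left pixel, somewhere in 0..255
```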

What about colored photos?

Well, so far we had two dimensions, width and height, while the number inside corresponded to the intensity of the color.

What if we want more colors?

Well, one solution mankind has come up with is the RGB scale, where RGB stands for red, green, and blue.

The idea is that any color perceivable by the human eye can be decomposed into some combination of red, green, and blue, where the intensity of each color goes from 0 to 255, a total of 256 shades.

In order to represent a colored photo in some linear algebraic form, we must take the example from before and add another dimension: color.

So, instead of a 400-by-400 matrix, we get a 3-by-400-by-400 tensor.

This tensor contains three 400-by-400 matrices, one for each color (red, green, and blue), and that’s how deep neural networks work with photos.
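A minimal sketch of that tensor, again with random values in place of a real photo (note that some libraries store the channels last, as 400 × 400 × 3, instead):

```python
import numpy as np

# A hypothetical color photo: three 400x400 matrices, one per channel.
photo_rgb = np.random.randint(0, 256, size=(3, 400, 400), dtype=np.uint8)

red, green, blue = photo_rgb  # each channel is itself a 400x400 matrix
print(photo_rgb.shape)        # (3, 400, 400)
print(red.shape)              # (400, 400)
```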

Great!

Finally, dimensionality reduction.

Since we haven’t seen eigenvalues and eigenvectors yet, there is not much to say here, except for developing some intuition.

Imagine we have a dataset with three variables; visually, our data may look like this.

In order to represent each of those points, we have used three values, one for each variable: x, y, and z.

Therefore, we are dealing with an n-by-3 matrix, and the point i corresponds to a vector (x_i, y_i, z_i).

Now, those three variables x, y, and z are the three axes of this plot.

Here is where it becomes interesting: in some cases, we can find a plane very close to the data, something like this.

This plane is two-dimensional, so it is defined by two variables, say u and v.

Not all points lie on this plane, but we can approximately say that they do.

Linear algebra provides us with fast and efficient ways to transform our initial matrix from n by 3, where the three variables are x, y, and z, into a new matrix, which is n by 2, where the two variables are u and v.

In this way, instead of having three variables, we reduce the problem to two.

In fact, if you have 50 variables, you can reduce them to 40, or 20, or even 10.
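Since the eigenvalue machinery comes later in the course, here is only a rough sketch of one standard way to perform such a projection, principal component analysis via the singular value decomposition; the data is synthetic and all names are made up:

```python
import numpy as np

# Synthetic data: 200 points in 3 variables (x, y, z), lying near a plane.
rng = np.random.default_rng(0)
true_uv = rng.normal(size=(200, 2))          # the two "hidden" coordinates
plane = np.array([[1.0, 0.0, 0.5],
                  [0.0, 1.0, 0.5]])          # how the plane sits in 3-D space
X = true_uv @ plane + rng.normal(scale=0.05, size=(200, 3))

# PCA via SVD: project the centered data onto its top two directions.
X_centered = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
X_2d = X_centered @ Vt[:2].T                 # the new n-by-2 matrix (u and v)

print(X.shape, "->", X_2d.shape)             # (200, 3) -> (200, 2)
```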

How does that relate to the real world? Why does it make sense to do that?

Well, imagine a survey with a total of 50 questions, three of which are the following. Please rate from 1 to 5:

1. I feel comfortable around people.

2. I easily make friends.

3. I like going out.

Now, these questions may seem different, but in the general case, they aren’t.

They all measure your level of extroversion, so it makes sense to combine them.

Right?

That’s where dimensionality reduction techniques and linear algebra come in.

Very, very often we have too many variables that are not so different.

So we want to reduce the complexity of the problem by reducing the number of variables.

All right.

Now that you are convinced this new knowledge is useful, let’s go back and learn some more linear algebra.

Thanks for watching.
