Reshaping operations - Tensors for deep learning

Welcome back to this series on neural network programming. Starting with this post, we'll begin using the knowledge we've learned about tensors up to this point and start covering essential tensor operations for neural networks. We'll kick things off with reshaping operations. Without further ado, let's get started.

Before we dive in with specific tensor operations, let's get a quick overview of the landscape by looking at the main operation categories that encompass the operations we'll cover. There are a lot of individual operations out there, so many that it can be intimidating when you're just beginning, but grouping similar operations into categories based on their likeness can make learning about tensor operations more manageable. We have four high-level categories of operations, and the first of them, reshaping operations, is the subject of this post.

The reason for showing these categories up front is to give you the goal of understanding all four of them by the end of this section of the series. The goal of these posts on tensor operations is not only to showcase specific, commonly used tensor operations, but also to describe the operation landscape. Knowing the types of operations that exist can stay with us longer than knowing any one particular operation. Keep this in mind and work towards understanding these categories as we explore each of them.

Let's jump in now with reshaping operations. Reshaping operations are perhaps the most important type of tensor operation. As we saw in the post where we introduced tensors, the shape of a tensor gives us something concrete we can work with.

Suppose we are neural network programmers, and as such, we typically spend our days building neural networks. We use math tools like calculus and linear algebra, computer science tools like Python and PyTorch, physics and engineering tools like CPUs and GPUs, and machine learning tools like neural networks, layers, activation functions, etc.

Our task is to build neural networks that can transform or map input data to the correct output we are seeking. The primary ingredient we use to produce our product, a function that maps inputs to correct outputs, is data. Data is somewhat of an abstract concept, so when we want to actually use the concept of data to implement something, we use a specific data structure called a tensor that can be implemented efficiently in code. Tensors have properties, mathematical and otherwise, that allow us to do our work.

Tensors are the primary ingredient that neural network programmers use to produce their product, intelligence. This is very similar to how a baker uses dough to produce, say, a pizza. The dough is the input used to create an output, but before the pizza is produced, there is usually some form of reshaping of the input that is required. As neural network programmers, we have to do the same with our tensors; shaping and reshaping them is a frequent task. Instead of producing pizzas, we are producing intelligence! This may be lame, but whatever. Our networks operate on tensors, after all, and this is why understanding a tensor's shape and the available reshaping operations is so important.

Suppose that we have a rank-2 tensor with 12 elements, say, three rows and four columns. To determine the shape of this tensor, we look first at the rows and then at the columns, giving a shape of 3 x 4. Rank is a word that is commonly used and just means the number of dimensions present within the tensor. In PyTorch, we have two ways to get the shape: the size() method and the shape attribute.

Typically, after we know a tensor's shape, we can deduce a couple of things. First, we can deduce the tensor's rank: the rank of a tensor is equal to the length of the tensor's shape. We can also deduce the number of elements contained within the tensor: the number of elements inside a tensor (12 in our case) is equal to the product of the shape's component values. In PyTorch, there is a dedicated function for this, numel().

The number of elements contained within a tensor is important for reshaping, because any reshaping must account for the total number of elements present. Reshaping changes the tensor's shape but not the underlying data. Our tensor has 12 elements, so any reshaping must account for exactly 12 elements. Let's look now at the ways in which this tensor t can be reshaped without changing its rank. Using the reshape() function, we can specify the row x column shape that we are seeking: 1 x 12, 2 x 6, 3 x 4, 4 x 3, 6 x 2, or 12 x 1. Notice how all of these shapes have to account for the number of elements in the tensor: in our example, rows * columns = 12.

We can use the intuition of rows and columns when we are dealing with a rank 2 tensor. The underlying logic is the same for higher dimensional tensors, even though we may not be able to use the row-and-column intuition there.
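The ideas above can be sketched in a few lines of PyTorch. The specific tensor values here are assumed for illustration; the post only tells us it is a rank-2 tensor with 12 elements, so we use a 3 x 4 example:

```python
import torch

# An assumed 3 x 4 stand-in for the tensor discussed above
# (rank 2, 12 elements); the values themselves are illustrative.
t = torch.tensor([
    [1, 1, 1, 1],
    [2, 2, 2, 2],
    [3, 3, 3, 3],
], dtype=torch.float32)

# Two ways to get the shape in PyTorch:
print(t.size())   # torch.Size([3, 4])
print(t.shape)    # torch.Size([3, 4])

# The rank is the length of the shape:
print(len(t.shape))  # 2

# The number of elements is the product of the shape's components;
# PyTorch's dedicated function for this is numel():
print(t.numel())  # 12

# Any reshape must account for all 12 elements.
# A few rank-2 possibilities:
print(t.reshape(1, 12).shape)  # torch.Size([1, 12])
print(t.reshape(2, 6).shape)   # torch.Size([2, 6])
print(t.reshape(6, 2).shape)   # torch.Size([6, 2])
print(t.reshape(12, 1).shape)  # torch.Size([12, 1])
```

Note that a shape like (5, 2) would fail here, since 5 * 2 = 10 does not match the 12 elements in the tensor.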
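As the article notes, the row-and-column intuition only applies to rank-2 tensors, but the element-count rule carries over unchanged to higher ranks. A minimal sketch of this idea (the particular rank-3 shape here is an assumed example, not from the original post):

```python
import torch

# The same 12 elements can fill a rank-3 shape, since 2 * 2 * 3 = 12.
t = torch.arange(12, dtype=torch.float32)  # shape: torch.Size([12])

r3 = t.reshape(2, 2, 3)
print(r3.shape)    # torch.Size([2, 2, 3])
print(r3.numel())  # 12 -- reshaping never changes the element count

# reshape(-1) lets PyTorch infer a single dimension from the
# element count, flattening back to rank 1:
flat = r3.reshape(-1)
print(flat.shape)  # torch.Size([12])
```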