How to perform in-place operations in PyTorch?
In-place operations modify the content of a tensor directly, without making a copy of it. Because no copy of the input is created, they reduce memory usage when dealing with high-dimensional data and help conserve GPU memory.
In PyTorch, in-place operations are always suffixed with an underscore ("_"), for example add_(), mul_(), etc.
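As a quick illustration of this naming convention, each out-of-place arithmetic method has an in-place counterpart; a minimal sketch (the tensor values here are chosen only for illustration):

```python
import torch

# a small example tensor
t = torch.tensor([1.0, 2.0, 3.0])

t.add_(10)   # in-place addition: t becomes [11., 12., 13.]
t.mul_(2)    # in-place multiplication: t becomes [22., 24., 26.]
t.sub_(2)    # in-place subtraction: t becomes [20., 22., 24.]

print(t)     # the original tensor has been modified in place
```

Each call mutates t itself rather than returning a fresh tensor, so no intermediate copies are allocated.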
Steps
To perform an in-place operation, follow the steps given below −
Import the required library. The required library is torch.
Define/create tensors on which in-place operation is to be performed.
Perform both normal and in-place operations to see the clear difference between them.
Display the tensors obtained in normal and in-place operations.
Example 1
The following Python program highlights the difference between a normal addition and an in-place addition. With in-place addition, the value of the first operand "x" is changed, while with normal addition it remains unchanged.
# import the required library
import torch

# create two tensors x and y
x = torch.tensor(4)
y = torch.tensor(3)
print("x =", x.item())
print("y =", y.item())

# normal addition: x is left unchanged
z = x.add(y)
print("Normal Addition x:", x.item())

# in-place addition: x itself is modified
z = x.add_(y)
print("In-place Addition x:", x.item())
Output
x = 4
y = 3
Normal Addition x: 4
In-place Addition x: 7
In the above program, two tensors x and y are added. The normal addition operation leaves the value of x unchanged, whereas the in-place addition operation modifies it.
Example 2
The following Python program shows how normal addition and in-place addition differ in terms of memory allocation.
# import the required library
import torch

# create two tensors x and y
x = torch.tensor(4)
y = torch.tensor(3)
print("id(x)=", id(x))

# normal addition: a new tensor object is allocated for z
z = x.add(y)
print("Normal Addition id(z):", id(z))

# in-place addition: z refers to the same object as x
z = x.add_(y)
print("In-place Addition id(z):", id(z))
Output
id(x)= 63366656
Normal Addition id(z): 63366080
In-place Addition id(z): 63366656
In the above program, the normal operation allocates a new memory location for "z", whereas the in-place operation does not allocate new memory.
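The same point can also be checked without comparing raw id() values: an in-place method returns the very tensor it was called on, so an identity test with the is operator distinguishes the two cases. A small sketch along those lines:

```python
import torch

x = torch.tensor(4.0)
y = torch.tensor(3.0)

# out-of-place addition returns a brand-new tensor
z1 = x.add(y)
print(z1 is x)   # False: z1 is a different object from x

# in-place addition returns x itself
z2 = x.add_(y)
print(z2 is x)   # True: z2 and x are the same object
```

Using is avoids relying on the exact id() numbers, which vary from run to run.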