Found 27154 Articles for Server Side Programming
![Rohan Singh](https://www.tutorialspoint.com/assets/profiles/638564/profile/60_146647-1681360804.jpg)
1K+ Views
Pandas is an open-source Python library used for data analysis and manipulation. Pandas provides functionality for data cleaning, transformation, and filtering. In large datasets, extreme values called outliers can distort the results of data analysis. To identify those outliers, a robust statistical measure called the Interquartile Range (IQR) is used. In this article, we will understand how to filter with the IQR in pandas to identify and handle outliers in a dataset. Understanding the Interquartile Range (IQR) Before understanding how to use the pandas filter with the IQR, let's briefly understand what the Interquartile Range (IQR) is. Quartiles divide a dataset into four ... Read More
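The IQR filtering described above can be sketched as follows. This is a minimal example on hypothetical data, using the conventional 1.5 × IQR fences to drop outliers:

```python
import pandas as pd

# Sample dataset with one obvious outlier (hypothetical data)
df = pd.DataFrame({"value": [10, 12, 11, 13, 12, 95, 11, 14]})

# First and third quartiles, and the interquartile range
q1 = df["value"].quantile(0.25)
q3 = df["value"].quantile(0.75)
iqr = q3 - q1

# Keep only rows inside the conventional 1.5*IQR fences
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
filtered = df[(df["value"] >= lower) & (df["value"] <= upper)]
print(filtered)
```

Here the value 95 falls above the upper fence and is removed, while the remaining seven rows survive.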
Rohan Singh
498 Views
Pandas is a Python library used for the manipulation and analysis of structured data. The cut() and qcut() methods of pandas are used for creating categorical variables from numerical data. The cut() and qcut() methods split numerical data into discrete intervals or quantiles, respectively, and assign a label to each interval or quantile. In this article, we will understand the functionality of the cut() and qcut() methods with the help of various examples. The cut() Function The cut() function divides a continuous variable into discrete bins or intervals based on specified criteria. It creates groups or categories of ... Read More
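The contrast between the two methods can be sketched briefly. In this hypothetical example, cut() bins ages at fixed boundaries, while qcut() puts roughly equal numbers of values in each quantile bin:

```python
import pandas as pd

ages = pd.Series([5, 15, 25, 35, 45, 55, 65, 75])

# cut(): fixed-width bins defined by explicit edges
age_groups = pd.cut(ages, bins=[0, 18, 40, 65, 100],
                    labels=["child", "young", "adult", "senior"])

# qcut(): quantile-based bins, each holding roughly equal counts
quartiles = pd.qcut(ages, q=4, labels=["Q1", "Q2", "Q3", "Q4"])

print(age_groups.value_counts())
print(quartiles.value_counts())
```

With eight values and q=4, each qcut() bin receives exactly two elements, whereas the cut() bins vary in size depending on where the values fall.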
Rohan Singh
415 Views
The apply() function in pandas is used to apply a custom function to a DataFrame or Series. The apply() function can be used to perform transformations, computations, and other operations on the data. The apply() function always returns a new DataFrame or Series; it has no inplace parameter, so to modify the original object we assign the result back to it. In this article, we will understand how to use the apply() function with an in-place effect with the help of examples. Syntax of apply() Function df.apply(func, axis=0) Here, df is the DataFrame on which we need to apply ... Read More
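A minimal sketch of the assignment pattern, on hypothetical data:

```python
import pandas as pd

df = pd.DataFrame({"price": [100, 200, 300]})

# apply() returns a new Series; df itself is unchanged here
doubled = df["price"].apply(lambda x: x * 2)

# For an "in-place" effect, assign the result back to the column
df["price"] = df["price"].apply(lambda x: x * 2)
print(df)
```

After the assignment, the original column holds the transformed values.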
Rohan Singh
4K+ Views
The NumPy where() function allows us to perform element-wise conditional operations on arrays. NumPy is a Python library used for numerical computation and data manipulation. To use the where() method with multiple conditions in Python, we combine them with the logical operators & (and), | (or), and ~ (not). In this article, we will explore some examples of using NumPy where() with multiple conditions in Python. Syntax of where() Method numpy.where(condition, x, y) Here, the `condition` parameter is a boolean array or a condition that evaluates to a boolean array. The x and y are arrays which ... Read More
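Combining two conditions with & can be sketched as follows, on hypothetical score data:

```python
import numpy as np

scores = np.array([35, 55, 72, 88, 95])

# Combine two conditions with & (parentheses are required because
# & binds more tightly than the comparison operators)
grades = np.where((scores >= 50) & (scores < 90), "pass", "other")
print(grades)
```

Only the values inside the [50, 90) range are labelled "pass"; everything else falls through to the y argument.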
Rohan Singh
465 Views
Paragraphs can be scraped using the Beautiful Soup library of Python. BeautifulSoup is a Python library that allows us to parse HTML and XML documents effortlessly. It provides a convenient way to navigate and search the parsed data, making it an ideal choice for web scraping tasks. By utilizing its robust features, we can extract specific elements, such as paragraphs, from web pages. In this article, we will scrape paragraphs using the Beautiful Soup library of Python. Installing the Required Libraries Before scraping paragraphs, we need to install the necessary libraries. Open your terminal or command prompt and ... Read More
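The core of the approach is find_all("p"). A minimal sketch, using an inline HTML string in place of a page fetched over the network:

```python
from bs4 import BeautifulSoup

# A small HTML snippet stands in for a fetched page; in practice the
# markup would come from requests.get(url).text
html = """
<html><body>
  <p>First paragraph.</p>
  <div><p>Second paragraph.</p></div>
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")
# find_all("p") locates every <p> tag, however deeply nested
paragraphs = [p.get_text(strip=True) for p in soup.find_all("p")]
print(paragraphs)
```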
Rohan Singh
1K+ Views
The data in a local HTML file can be extracted using Beautiful Soup and Python file handling techniques. Beautiful Soup allows us to parse HTML documents and navigate their structure, while file handling enables us to fetch the HTML content from local files. By combining these tools, we can extract valuable data from HTML files stored on our computer. In this article, we will understand how to scrape data from local HTML files using Python. Prerequisites Before understanding how to scrape data from local HTML files, make sure you have Python installed on your machine. ... Read More
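The file-handling half of the technique is an ordinary open()/read(); BeautifulSoup then parses the text. A self-contained sketch (the file is written first only so the example runs on its own; the filename sample.html is hypothetical):

```python
from bs4 import BeautifulSoup

# Write a small HTML file so the example is self-contained; normally
# this file would already exist on disk
with open("sample.html", "w", encoding="utf-8") as f:
    f.write("<html><body><h1>Report</h1><p>Total: 42</p></body></html>")

# Standard file handling fetches the markup; BeautifulSoup parses it
with open("sample.html", "r", encoding="utf-8") as f:
    soup = BeautifulSoup(f.read(), "html.parser")

print(soup.h1.get_text())
print(soup.p.get_text())
```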
Rohan Singh
3K+ Views
Google Maps is a powerful tool that provides a vast amount of geospatial data, including locations, addresses, reviews, ratings, and more. Being able to extract this data programmatically can be immensely useful for various applications such as business analysis, research, and data-driven decision-making. In this article, we will explore how to scrape data from Google Maps using Python. Step 1: Install Required Libraries To begin, we need to install the Python libraries that will facilitate the web scraping process. Open your command prompt or terminal and run the following commands: pip install requests and pip install beautifulsoup4 ... Read More
Rohan Singh
1K+ Views
Web scraping is a powerful technique used to extract data from websites. One popular library for web scraping in Python is BeautifulSoup. BeautifulSoup provides a simple and intuitive way to parse HTML or XML documents and extract the desired information. In this article, we will explore how to scrape all the text from the `<body>` tag of a web page using BeautifulSoup in Python. Algorithm The following algorithm outlines the steps to scrape all text from the body tag using BeautifulSoup: Import the required libraries: We need to import the requests library to make HTTP requests and the BeautifulSoup ... Read More
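Once the page is parsed, a single get_text() call on soup.body collects the text of every element nested inside it. A minimal sketch, with an inline HTML string standing in for a fetched page:

```python
from bs4 import BeautifulSoup

# Inline HTML stands in for a page fetched with requests.get(url).text
html = ("<html><head><title>Ignored</title></head>"
        "<body><h1>Hi</h1><p>All body text.</p></body></html>")

soup = BeautifulSoup(html, "html.parser")

# get_text() on the body tag gathers text from every nested element;
# the <title> in <head> is excluded
body_text = soup.body.get_text(separator=" ", strip=True)
print(body_text)
```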
Rohan Singh
2K+ Views
Scaling is a preprocessing step in data analysis that ensures all the features in a dataset have similar ranges, making them more comparable and reducing the impact of differing scales on machine learning algorithms. We can scale pandas DataFrame columns using methods such as min-max scaling, standardization, robust scaling, and log transformation. In this article, we will dive into the process of scaling pandas DataFrame columns using these methods. Why Is Scaling Important? Some features in the data may have larger values which can dominate when analysis or model training is done. Scaling ensures ... Read More
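Two of the methods named above, min-max scaling and standardization, can be sketched with plain column arithmetic on hypothetical data:

```python
import pandas as pd

df = pd.DataFrame({"income": [30000, 45000, 60000, 90000]})
col = df["income"]

# Min-max scaling: rescale values into the [0, 1] range
df["minmax"] = (col - col.min()) / (col.max() - col.min())

# Standardization (z-score): zero mean, unit standard deviation
df["zscore"] = (col - col.mean()) / col.std()

print(df)
```

The smallest income maps to 0 and the largest to 1 under min-max scaling, while the z-score column has mean zero by construction.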
![Kalyan Mishra](https://www.tutorialspoint.com/assets/profiles/493144/profile/60_3341000-1658833471.jpg)
99 Views
In this article we will get to know various methods with which we can find prefix tuple records using the Python programming language. A tuple is an immutable sequence, like a list, whose values cannot be changed once assigned. Here, a prefix tuple record is a tuple whose first element shares a common prefix. Each element of a tuple can be of any data type, or the elements can be of mixed types. Example records = [('Kalyan', 123), ('Gungun', 122), ('Komal', 456), ('Isestru', 789), ('Kosal', 321)] prefix = 'Ko' Here you can see we have a list of records containing tuples in ... Read More
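One straightforward way to pick out the matching records from the example data above is a list comprehension with str.startswith():

```python
# Filter tuples whose first element starts with a given prefix
records = [('Kalyan', 123), ('Gungun', 122), ('Komal', 456),
           ('Isestru', 789), ('Kosal', 321)]
prefix = 'Ko'

# startswith() checks the prefix of the name in each tuple
matches = [rec for rec in records if rec[0].startswith(prefix)]
print(matches)
```

Only the records beginning with 'Ko' survive the filter, namely ('Komal', 456) and ('Kosal', 321).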