Get Confirmed, Recovered, Deaths cases of Corona around the globe using Python
The COVID-19 pandemic impacted billions of lives around the world and caused widespread concern. Several applications were built to provide accurate, up-to-date counts of confirmed cases, recoveries, and deaths. Being able to fetch and analyze this information is important for developers who want to build applications around such data. In this article, we will look at how to retrieve statistical data on COVID-19 cases using Python.
Using APIs
APIs (Application Programming Interfaces) are central to modern programming and software development. An API enables software applications to interact with each other: it defines a set of protocols that other applications can use to exchange data and invoke functionality. APIs take various forms, such as web APIs, library APIs, operating system APIs, and hardware APIs. Web APIs, usually based on HTTP, are among the most common; they let developers access data and services over the internet by making HTTP requests to specific endpoints.
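Before wiring up a full program, it helps to see what such an endpoint returns. The sketch below parses a sample JSON payload with Python's standard json module; the field names (cases, recovered, deaths) match those used by the disease.sh endpoint later in this article, and the numbers here are made up for illustration.

```python
import json

# A sample payload shaped like the response of the disease.sh /covid-19/all
# endpoint (the numbers are illustrative, not real statistics).
sample_response = '{"cases": 1000, "recovered": 900, "deaths": 50}'

data = json.loads(sample_response)  # parse the JSON text into a dict
print("Confirmed:", data["cases"])
print("Recovered:", data["recovered"])
print("Deaths:", data["deaths"])
```

Once the response is a plain dictionary, the counts can be extracted by key exactly as the functions below do.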
For our example, we will use the following API endpoint:
https://disease.sh/v3/covid-19/all
Example
In the following code, we first import all the necessary modules. The fetch_data function fetches the data in JSON format from the API. The process_data function turns the fetched data into a DataFrame. The analyze_cases function extracts the confirmed, recovered, and death counts from the DataFrame. The visualize_data function then plots the data as a bar plot using matplotlib and seaborn. Finally, the main function serves as the driver code.
```python
import requests
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

def fetch_data(url):
    response = requests.get(url)
    data = response.json()
    return data

def process_data(data):
    df = pd.DataFrame(data, index=[0])
    return df

def analyze_cases(df):
    confirmed_cases = df['cases'].iloc[0]
    recovered_cases = df['recovered'].iloc[0]
    death_cases = df['deaths'].iloc[0]
    return confirmed_cases, recovered_cases, death_cases

def visualize_data(confirmed_cases, recovered_cases, death_cases):
    labels = ['Confirmed', 'Recovered', 'Deaths']
    values = [confirmed_cases, recovered_cases, death_cases]
    plt.figure(figsize=(8, 6))
    sns.barplot(x=labels, y=values)
    plt.xlabel("Cases")
    plt.ylabel("Count")
    plt.title("Global COVID-19 Cases")
    plt.show()

def main():
    url = "https://disease.sh/v3/covid-19/all"
    data = fetch_data(url)
    df = process_data(data)
    confirmed_cases, recovered_cases, death_cases = analyze_cases(df)
    print("Global COVID-19 Cases:")
    print("Confirmed cases:", confirmed_cases)
    print("Recovered cases:", recovered_cases)
    print("Death cases:", death_cases)
    visualize_data(confirmed_cases, recovered_cases, death_cases)

if __name__ == '__main__':
    main()
```
Output
```
Global COVID-19 Cases:
Confirmed cases: 690148376
Recovered cases: 662646473
Death cases: 6890206
```
![](https://www.tutorialspoint.com/assets/questions/media/616697-1689664542.png)
Using Web Scraping with BeautifulSoup
BeautifulSoup is a popular Python library for web scraping, the process of extracting meaningful data from the web. The library parses documents in HTML and XML formats and provides powerful, convenient methods to search and filter elements within the parsed document. For our use case, we can first fetch the page with Python's requests library and then extract the text we need with BeautifulSoup.
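As a minimal sketch of BeautifulSoup's search methods, the snippet below parses an inline HTML fragment shaped like the counters on the page scraped later in this article (the class name maincounter-number is taken from that example; the numbers are made up):

```python
from bs4 import BeautifulSoup

# An inline HTML fragment imitating the structure of the page's counters.
html = """
<div class="maincounter-number"><span>1,000</span></div>
<div class="maincounter-number"><span>50</span></div>
<div class="maincounter-number"><span>900</span></div>
"""

soup = BeautifulSoup(html, 'html.parser')

# find() returns the first matching element; find_all() returns every match.
first = soup.find('div', class_='maincounter-number')
print(int(first.span.text.replace(',', '')))  # 1000

counters = soup.find_all('div', class_='maincounter-number')
print(len(counters))  # 3
```

The same find/find_all pattern, with the thousands separators stripped before converting to int, is what the extract_cases function below relies on.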
The requests library in Python makes it easy to interact with web services and handle HTTP requests. It provides a user-friendly way to send requests and process responses. With requests, developers can send the various HTTP methods, such as GET, POST, PUT, DELETE, and more. It supports different types of data payloads, including URL-encoded form data, JSON, and file uploads, making it versatile for different web interactions.
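As a quick illustration of how requests builds different request types and payloads, the sketch below prepares a POST request with a JSON body without actually sending anything over the network (the URL is a placeholder):

```python
import requests

# Build and prepare a POST request carrying a JSON payload; prepare()
# serializes the body and sets the headers, but nothing is sent.
req = requests.Request('POST', 'https://example.com/api', json={'city': 'Delhi'})
prepared = req.prepare()

print(prepared.method)                   # POST
print(prepared.headers['Content-Type'])  # application/json
print(prepared.body)                     # the serialized JSON payload
```

In the scraping code below, only a simple requests.get(url) is needed, since the page is fetched with a plain GET request.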
Example
In the following code, we first import requests, BeautifulSoup, seaborn, and the other required modules. The fetch_data function uses the requests library to fetch the page and returns the parsed HTML content. The extract_cases function uses BeautifulSoup's search methods to pull out the required numbers. The visualize_data function takes the confirmed, recovered, and death counts as arguments and plots a bar plot using matplotlib. The main function is the driver code of our program.
```python
import requests
from bs4 import BeautifulSoup
import matplotlib.pyplot as plt
import seaborn as sns

def fetch_data(url):
    response = requests.get(url)
    soup = BeautifulSoup(response.content, 'html.parser')
    return soup

def extract_cases(soup):
    # The page lists the counters in the order: confirmed, deaths, recovered.
    counters = soup.find_all('div', class_='maincounter-number')
    confirmed_cases = int(counters[0].span.text.replace(',', ''))
    death_cases = int(counters[1].span.text.replace(',', ''))
    recovered_cases = int(counters[2].span.text.replace(',', ''))
    return confirmed_cases, recovered_cases, death_cases

def visualize_data(confirmed_cases, recovered_cases, death_cases):
    labels = ['Confirmed', 'Recovered', 'Deaths']
    values = [confirmed_cases, recovered_cases, death_cases]
    plt.figure(figsize=(8, 6))
    sns.barplot(x=labels, y=values)
    plt.xlabel("Cases")
    plt.ylabel("Count")
    plt.title("Global COVID-19 Cases")
    plt.show()

def main():
    url = "https://www.worldometers.info/coronavirus/"
    soup = fetch_data(url)
    confirmed_cases, recovered_cases, death_cases = extract_cases(soup)
    print("Global COVID-19 Cases:")
    print("Confirmed cases:", confirmed_cases)
    print("Recovered cases:", recovered_cases)
    print("Death cases:", death_cases)
    visualize_data(confirmed_cases, recovered_cases, death_cases)

if __name__ == '__main__':
    main()
```
Output
```
Global COVID-19 Cases:
Confirmed cases: 690148376
Recovered cases: 662646473
Death cases: 6890206
```
![](https://www.tutorialspoint.com/assets/questions/media/616697-1689664619.png)
Using Web Scraping with the Selenium Library
Selenium is a powerful Python library for automating web browsers. It allows programmers to control a browser from code, which makes it possible to automate browsing tasks. Selenium supports various browsers, including Chrome, Firefox, Safari, and Microsoft Edge. It works by interacting with the browser through a WebDriver, which bridges the Selenium library and the browser. We can use Selenium to retrieve a page's rendered content for our specific needs.
Example
In the following example, we first import the necessary modules, such as webdriver from selenium and BeautifulSoup. The get_html function launches a Chrome WebDriver, loads the URL, and returns the page's HTML source. The extract_cases function extracts the required numbers from the parsed HTML. The visualize_data function plots a bar graph of the data, and the main function contains our driver code.
```python
from selenium import webdriver
from bs4 import BeautifulSoup
import matplotlib.pyplot as plt
import seaborn as sns

def get_html(url):
    driver = webdriver.Chrome()
    driver.get(url)
    html = driver.page_source
    driver.quit()
    return html

def extract_cases(soup):
    # The page lists the counters in the order: confirmed, deaths, recovered.
    counters = soup.find_all('div', class_='maincounter-number')
    confirmed_cases = int(counters[0].span.text.replace(',', ''))
    death_cases = int(counters[1].span.text.replace(',', ''))
    recovered_cases = int(counters[2].span.text.replace(',', ''))
    return confirmed_cases, recovered_cases, death_cases

def visualize_data(confirmed_cases, recovered_cases, death_cases):
    labels = ['Confirmed', 'Recovered', 'Deaths']
    values = [confirmed_cases, recovered_cases, death_cases]
    plt.figure(figsize=(8, 6))
    sns.barplot(x=labels, y=values)
    plt.xlabel("Cases")
    plt.ylabel("Count")
    plt.title("Global COVID-19 Cases")
    plt.show()

def main():
    url = "https://www.worldometers.info/coronavirus/"
    html = get_html(url)
    soup = BeautifulSoup(html, 'html.parser')
    confirmed_cases, recovered_cases, death_cases = extract_cases(soup)
    print("Global COVID-19 Cases:")
    print("Confirmed cases:", confirmed_cases)
    print("Recovered cases:", recovered_cases)
    print("Death cases:", death_cases)
    visualize_data(confirmed_cases, recovered_cases, death_cases)

if __name__ == '__main__':
    main()
```
Output
```
Global COVID-19 Cases:
Confirmed cases: 690148376
Recovered cases: 662646473
Death cases: 6890206
```
![](https://www.tutorialspoint.com/assets/questions/media/616697-1689664698.png)
Conclusion
In this article, we learned how to get statistical data on COVID-19, and we both analyzed and visualized it. Python is a general-purpose language that offers several ways to obtain such data, including APIs and web scraping. We first used an API provided by a third-party service to fetch the figures, and then used web scraping, with both BeautifulSoup and Selenium, to extract the same data from a web page.