Data Collection by Web Scraping with Python

Adityalalwani
4 min read · Aug 2, 2021

Imagine you have to pull a large amount of data from websites and you want to do it as quickly as possible. How would you do it without manually going to each website and getting the data? Well, “Web Scraping” is the answer. Web Scraping just makes this job easier and faster.

In this article on Web Scraping with Python, you will learn about web scraping in brief and see how to extract data from a website with a demonstration. I will be covering the following topics:

  • Why is Web Scraping Used?
  • What Is Web Scraping?
  • Why is Python Good For Web Scraping?
  • Libraries used for Web Scraping
  • Web Scraping Example: Scraping the GitHub Search Page

Why is Web Scraping Used?

Web scraping is used to collect large amounts of information from websites. But why does someone need to collect such large amounts of data from websites? To find out, let’s look at the applications of web scraping:

  • Price Comparison: Services such as ParseHub use web scraping to collect data from online shopping websites and use it to compare the prices of products.
  • Email address gathering: Many companies that use email as a medium for marketing use web scraping to collect email IDs and then send bulk emails.
  • Social Media Scraping: Web scraping is used to collect data from Social Media websites such as Twitter to find out what’s trending.
  • Research and Development: Web scraping is used to collect a large set of data (Statistics, General Information, Temperature, etc.) from websites, which are analyzed and used to carry out Surveys or for R&D.
  • Job listings: Details regarding job openings and interviews are collected from different websites and then listed in one place so that they are easily accessible to the user.

What is Web Scraping?

Web scraping is an automated method used to extract large amounts of data from websites. The data on websites is unstructured; web scraping helps collect this unstructured data and store it in a structured form. There are different ways to scrape websites, such as online services, APIs, or writing your own code. In this article, we’ll see how to implement web scraping with Python.
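To make the idea concrete before introducing any libraries, here is a minimal, dependency-free sketch of the “writing your own code” approach: it uses only Python’s built-in html.parser to turn a fragment of unstructured HTML into a structured list (the HTML snippet is invented for illustration):

```python
from html.parser import HTMLParser

# A tiny HTML fragment standing in for a downloaded page (invented for illustration).
HTML = """
<ul>
  <li class="product">Laptop A</li>
  <li class="product">Laptop B</li>
</ul>
"""

class ProductParser(HTMLParser):
    """Collects the text of every <li class="product"> element."""
    def __init__(self):
        super().__init__()
        self.in_product = False
        self.products = []

    def handle_starttag(self, tag, attrs):
        if tag == "li" and ("class", "product") in attrs:
            self.in_product = True

    def handle_data(self, data):
        if self.in_product and data.strip():
            self.products.append(data.strip())

    def handle_endtag(self, tag):
        if tag == "li":
            self.in_product = False

parser = ProductParser()
parser.feed(HTML)
print(parser.products)  # unstructured HTML turned into a structured Python list
```

Hand-rolled parsers like this get tedious quickly, which is exactly why the libraries discussed below exist.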

(Image source: https://medium.com/@nageshsinghchauhan/scrap-multiple-websites-and-extract-contact-information-from-each-of-them-using-python-267346bd367)

Why is Python Good for Web Scraping?

Here is a list of features of Python that make it well suited for web scraping:

  • Ease of Use: Python is simple to code. You do not have to add semi-colons “;” or curly-braces “{}” anywhere. This makes it less messy and easy to use.
  • Large Collection of Libraries: Python has a huge collection of libraries such as NumPy, Matplotlib, Pandas, etc., which provide methods and services for various purposes. Hence, it is suitable for web scraping and for further manipulation of the extracted data.
  • Dynamically typed: In Python, you don’t have to define datatypes for variables, you can directly use the variables wherever required. This saves time and makes your job faster.
  • Easily Understandable Syntax: Python syntax is easily understandable mainly because reading a Python code is very similar to reading a statement in English. It is expressive and easily readable, and the indentation used in Python also helps the user to differentiate between different scope/blocks in the code.

Libraries used for Web Scraping

  • BeautifulSoup: Beautiful Soup is a Python package for parsing HTML and XML documents. It creates parse trees that make it easy to extract data.
  • Pandas: Pandas is a library used for data manipulation and analysis. Here it is used to store the extracted data in the desired format.
  • Requests: Requests is a simple and widely used library for making HTTP requests to a specified URL.
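To see how these libraries fit together, here is a minimal sketch. The HTML snippet and class names are invented for illustration; in a real scraper the markup would come from a Requests call such as `requests.get(url).text`:

```python
import pandas as pd
from bs4 import BeautifulSoup

# Hard-coded HTML (invented for illustration) keeps the example offline;
# in a live scraper this string would come from requests.get(url).text.
html = """
<div class="repo"><h2>scrapy</h2><p>Web crawling framework</p></div>
<div class="repo"><h2>beautifulsoup</h2><p>HTML/XML parser</p></div>
"""

soup = BeautifulSoup(html, "html.parser")        # BeautifulSoup: parse the markup
rows = [
    {"name": div.h2.get_text(), "details": div.p.get_text()}
    for div in soup.find_all("div", class_="repo")
]
df = pd.DataFrame(rows)                          # Pandas: store in a structured form
print(df)
```

Each library handles one stage: Requests fetches, BeautifulSoup parses, and Pandas structures and exports.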

Web Scraping Example: Scraping the GitHub Search Page

Step 1: Find the URL that you want to scrape

For this example, we are going to scrape GitHub’s search results page to extract the name and details of projects matching a query. The URL for this page is https://github.com/search?q=web+scraping.

Step 2: Inspecting the Page

The data is usually nested in tags, so we inspect the page to see under which tag the data we want to scrape is nested. To inspect the page, just right-click on the element and click on “Inspect”. When you click on “Inspect”, a browser inspector box will open.

Step 3: Find the data you want to extract

Let’s extract the project names and details, which are nested in “div” and “p” tags respectively.
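As a quick sanity check that the selectors work, you can try them on a saved fragment of the page. The fragment below imitates GitHub’s search results markup and is invented for illustration; the class names “f4 text-normal” and “mb-1” are the ones used in the code later in this article, and since GitHub’s markup changes over time, they should be confirmed in the browser inspector:

```python
from bs4 import BeautifulSoup

# Fragment imitating GitHub's search results page (invented for illustration).
html = """
<div class="f4 text-normal"><a href="/scrapy/scrapy">scrapy/scrapy</a></div>
<p class="mb-1">Scrapy, a fast high-level web crawling framework.</p>
"""

soup = BeautifulSoup(html, "html.parser")
names = soup.find_all("div", attrs={"class": "f4 text-normal"})
details = soup.find_all("p", attrs={"class": "mb-1"})

print(names[0].get_text(strip=True))    # the project name
print(details[0].get_text(strip=True))  # the project details
```

If the selectors return empty lists against the real page, the class names have changed and need to be re-checked in the inspector.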

Step 4: Write the code

from selenium import webdriver
from bs4 import BeautifulSoup
import pandas as pd

To configure the webdriver to use the Chrome browser, we have to set the path to chromedriver:

driver = webdriver.Chrome("/usr/lib/chromium-browser/chromedriver")

Refer to the code below to open the URL, parse the page, and store the results. (The CSS class names were taken from GitHub’s markup at the time of writing and may change.)

projects = []  # list to store the names of the projects
details = []   # list to store the details of the projects

driver.get("https://github.com/search?q=web+scraping")

content = driver.page_source
soup = BeautifulSoup(content, "html.parser")

names = soup.find_all('div', attrs={'class': 'f4 text-normal'})
detail = soup.find_all('p', attrs={'class': 'mb-1'})

# Pair each project name with its details and store them in the lists
for name, d in zip(names, detail):
    projects.append(name.get_text(strip=True))
    details.append(d.get_text(strip=True))

# Store the extracted data in a structured form and save it to a CSV file
df = pd.DataFrame({'Project Name': projects, 'Details': details})
df.to_csv('projects.csv', index=False)
driver.quit()

GitHub Link:
AdityaLalwani/DATA-SCIENCE (github.com)
