Puppeteer is a project from the Google Chrome team which enables us to control Chrome (or any other Chrome DevTools Protocol based browser) and execute common actions, much like in a real browser - programmatically, through a decent API. Put simply, it’s a super useful and easy tool for automating, testing and scraping web pages, in either headless or headful mode.
In this article we’re going to try out Puppeteer and demonstrate a variety of the available capabilities, through concrete examples.
Disclaimer: This article doesn’t claim to replace the official documentation but rather to elaborate on it - you should definitely go over it in order to be aligned with the most up-to-date API specification.
Checkly does in-depth API monitoring and synthetic monitoring using Puppeteer. It lets us run Puppeteer scripts every couple of minutes or trigger them from the continuous integration pipeline. Check it out during the article or afterwards.
How to Install
To begin with, we’ll have to install one of Puppeteer’s packages.
A lightweight package, called puppeteer-core, which is a library that interacts with any browser based on the DevTools Protocol - without actually installing Chromium. It comes in handy mainly when we don’t need a downloaded version of Chromium, for instance, when bundling this library within a project that interacts with a browser remotely.
In order to install, just run:
npm install puppeteer-core
The main package, called puppeteer, which is actually a full product for browser automation on top of puppeteer-core. Once it’s installed, the most recent version of Chromium is placed inside node_modules, which guarantees that the downloaded version is compatible with the host operating system.
Simply run the following to install:
npm install puppeteer
Now, we’re absolutely ready to go! 🤓
As mentioned before, Puppeteer is just an API over the Chrome DevTools Protocol. Naturally, it needs a Chromium instance to interact with. This is the reason why Puppeteer’s ecosystem provides methods to launch a new Chromium instance, as well as to connect to an existing one.
Let’s examine a few cases.
The easiest way to interact with the browser is by launching a Chromium instance using Puppeteer:
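A minimal sketch of launching a browser could look like this:

```javascript
const puppeteer = require('puppeteer');

(async () => {
  // Initialize a new Chromium instance and attach Puppeteer to it
  const browser = await puppeteer.launch();
  console.log(await browser.version());
  await browser.close();
})();
```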
The launch method initializes the instance first, and then attaches Puppeteer to it.
Notice this method is asynchronous (like most of Puppeteer’s methods), which means it returns a Promise. Once it’s resolved, we get a browser object that represents our initialized instance.
Sometimes we want to interact with an existing Chromium instance - whether we’re using puppeteer-core or just attaching a remote instance:
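A sketch of connecting to a manually launched instance - assuming the chrome-launcher package is installed alongside puppeteer-core:

```javascript
const chromeLauncher = require('chrome-launcher');
const http = require('http');
const puppeteer = require('puppeteer-core');

(async () => {
  // Launch a Chrome instance manually, outside of Puppeteer
  const chrome = await chromeLauncher.launch();

  // Fetch the webSocketDebuggerUrl value of the created instance
  const { webSocketDebuggerUrl } = await new Promise((resolve, reject) => {
    http
      .get(`http://localhost:${chrome.port}/json/version`, (res) => {
        let body = '';
        res.on('data', (chunk) => (body += chunk));
        res.on('end', () => resolve(JSON.parse(body)));
      })
      .on('error', reject);
  });

  // Attach the existing instance to Puppeteer via its WebSocket endpoint
  const browser = await puppeteer.connect({ browserWSEndpoint: webSocketDebuggerUrl });

  console.log(await browser.version());
  await browser.disconnect();
  await chrome.kill();
})();
```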
Well, it’s easy to see that we use chrome-launcher in order to launch a Chrome instance manually.
Then, we simply fetch the webSocketDebuggerUrl value of the created instance.
The connect method attaches the instance we just created to Puppeteer. All we have to do is supply the WebSocket endpoint of our instance.
Note: Of course, chrome-launcher is only to demonstrate an instance creation. We absolutely could connect an instance in other ways, as long as we have the appropriate WebSocket endpoint.
Some of you might wonder - could Puppeteer interact with other browsers besides Chromium? 🤔
Although there are projects that claim to support a variety of browsers - the official team started maintaining an experimental project that interacts with Firefox, specifically:
npm install puppeteer-firefox
puppeteer-firefox was an experimental package for examining communication with an outdated Firefox fork; however, this project is no longer maintained. Presently, the way to go is setting the PUPPETEER_PRODUCT environment variable to firefox and thus fetching the binary of Firefox Nightly.
We can easily do that as part of the installation:
PUPPETEER_PRODUCT=firefox npm install puppeteer
Alternatively, we can use the BrowserFetcher to fetch the binary.
Once we have the binary, we merely need to change the product option to “firefox”, whereas the rest of the lines remain the same - which means we’re already familiar with how to launch the browser:
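Assuming the Firefox Nightly binary was fetched as shown above, the launch could be sketched as (the URL is just an illustration):

```javascript
const puppeteer = require('puppeteer');

(async () => {
  // Only the product changes - the rest of the flow stays the same
  const browser = await puppeteer.launch({ product: 'firefox' });
  const page = await browser.newPage();
  await page.goto('https://www.mozilla.org');
  console.log(await page.title());
  await browser.close();
})();
```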
⚠️ Pay attention - the Firefox integration isn’t totally ready yet and is implemented progressively. It’s better to check out the implementation status here.
Imagine that instead of recreating a browser instance each time, which is a pretty expensive operation, we could use the same instance but separate it into different individual sessions which belong to this shared browser.
It’s actually possible, and these sessions are known as Browser Contexts.
A default browser context is created as soon as creating a browser instance, but we can create additional browser contexts as necessary:
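A sketch of creating and inspecting browser contexts:

```javascript
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();

  // The default context is created along with the browser instance
  const defaultContext = browser.defaultBrowserContext();

  // An additional, isolated (incognito-like) context
  const context = await browser.createIncognitoBrowserContext();
  const page = await context.newPage();

  console.log(browser.browserContexts().length); // the default one + ours

  await context.close(); // terminates only this context
  await browser.close(); // terminates all remaining contexts
})();
```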
Apart from demonstrating how to access each context, we need to know that the only way to terminate the default context is by closing the browser instance - which, in fact, terminates all the contexts that belong to the browser.
Better yet, browser contexts also come in handy when we want to apply a specific configuration to a session in isolation - for instance, granting additional permissions.
As opposed to headless mode - which merely uses the command line - headful mode opens the browser with a graphical user interface during execution:
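Launching in headful mode is just a matter of flipping the headless option:

```javascript
const puppeteer = require('puppeteer');

(async () => {
  // headless defaults to true - setting it to false opens a browser window
  const browser = await puppeteer.launch({ headless: false });
  const page = await browser.newPage();
  await page.goto('https://example.com');
  await browser.close();
})();
```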
Since the browser is launched in headless mode by default, we demonstrate how to launch it in a headful way.
In case you wonder - headless mode is mostly useful for environments that don’t really need a UI, or don’t support such an interface at all. The cool thing is that we can do almost everything headlessly in Puppeteer. 💪
Note: We’re going to launch the browser in a headful mode for most of the upcoming examples, which will allow us to notice the result clearly.
When writing code, we should be aware of what kinds of ways are available to debug our program. The documentation lists several tips about debugging Puppeteer.
Let’s cover the core principles:
1️⃣ - Checking how the browser is operated
It’s fairly probable that, at some point, we’d like to see how our script instructs the browser and what’s actually displayed.
The headful mode, which we’re already familiar with, helps us to practically do that:
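For instance, combining headful mode with the slowMo option (the 250ms value is just an illustration):

```javascript
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({
    headless: false, // open the browser with a GUI
    slowMo: 250,     // slow down every operation by 250ms
  });
  const page = await browser.newPage();
  await page.goto('https://example.com');
  await browser.close();
})();
```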
Beyond the fact that the browser is actually opened, we can now clearly notice the performed operations - thanks to slowMo, which slows down Puppeteer when performing each operation.
2️⃣ - Debugging our application code in the browser
In case we want to debug the application itself in the opened browser - it basically means to open the DevTools and start debugging as usual:
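A sketch of that, using the devtools launch option:

```javascript
const puppeteer = require('puppeteer');

(async () => {
  // devtools implies a headful launch and opens the DevTools automatically
  const browser = await puppeteer.launch({ devtools: true });
  const page = await browser.newPage();
  await page.goto('https://example.com');

  // Hold the browser process until we terminate it explicitly
  await browser.waitForTarget(() => false, { timeout: 0 });
})();
```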
Notice that we use devtools, which launches the browser in headful mode by default and opens the DevTools automatically.
On top of that, we utilize waitForTarget in order to hold the browser process until we terminate it explicitly.
Apparently - some of you may wonder if it’s possible to pause the browser for a specified time period, so:
The first approach is merely a function that resolves a promise when setTimeout finishes.
The second approach, however, is much simpler but demands having a page instance (we’ll get to that later).
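Both approaches could be sketched like this (waitForTimeout is available in newer Puppeteer versions; older versions exposed a similar page.waitFor):

```javascript
// Approach 1: a plain helper that resolves a promise when setTimeout fires
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

(async () => {
  const start = Date.now();
  await sleep(100); // pause for ~100ms
  console.log(`slept for ~${Date.now() - start}ms`);

  // Approach 2: much simpler, but demands having a page instance
  // await page.waitForTimeout(100);
})();
```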
3️⃣ - Debugging the process that uses Puppeteer
As we know, Puppeteer is executed in a Node.js process - which is absolutely separate from the browser process. Hence, in this case, we should debug it just as we would a regular Node.js application.
Whether we connect an inspector client or prefer using ndb - it’s all about placing the breakpoints right before Puppeteer’s operations. Adding them programmatically is also possible, simply by inserting the debugger; statement.
Now that Puppeteer is attached to a browser instance - which, as we already mentioned, represents our browser (Chromium, Firefox, whatever) - we can easily create a page (or multiple pages):
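A minimal sketch:

```javascript
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();

  // Create a new page (tab) in the default browser context
  const page = await browser.newPage();

  await browser.close();
})();
```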
In the code example above we plainly create a new page by invoking the newPage method. Notice it’s created in the default browser context.
Page is a class that represents a single tab in the browser (or an extension background).
As you might guess, this class provides handy methods and events in order to interact with the page (such as selecting elements, retrieving information, waiting for elements, etc.).
Well, it’s about time to present a list of practical examples, as promised. To do this, we’re going to scrape data from the official Puppeteer website and operate on it. 🕵
Navigating by URL
One of the first things is, intuitively, instructing the blank page to navigate to a specified URL:
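A sketch of navigating and printing the title (pptr.dev is the official Puppeteer website):

```javascript
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Instruct the blank page to navigate to the specified URL
  await page.goto('https://pptr.dev');

  // Take the title of the page's main frame and print it
  console.log(await page.title());

  await browser.close();
})();
```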
We use goto to drive the created page to navigate to Puppeteer’s website. Afterward, we just take the title of the page’s main frame, print it, and expect to get it as the output:
As we notice, the title is unexpectedly missing. 🧐
This example shows us that there’s no guarantee our page will render the selected element at the right moment, if at all. To clarify - possible reasons could be that the page loads slowly, part of the page is lazy-loaded, or perhaps it navigates immediately to another page.
That’s exactly why Puppeteer provides methods to wait for stuff like elements, navigation, functions, requests, responses or simply a certain predicate - mainly to deal with an asynchronous flow.
Anyway, it turns out that Puppeteer’s website has an entry page, which immediately redirects us to the well-known website’s index page.
The thing is, the entry page in question doesn’t render a title element:
When navigating to Puppeteer’s website, the title element is evaluated as an empty string. However, a few moments later, the page actually navigates to the website’s index page and renders with a title.
This means that the invoked title method is actually applied too early, to the entry page, instead of to the website’s index page. Thus, the entry page is considered the first main frame, and eventually its title, which is an empty string, is returned.
Let’s solve that case in a simple way:
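The same flow, with a wait for the title element before reading it:

```javascript
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://pptr.dev');

  // Wait until the title element is rendered within the page
  await page.waitForSelector('title');

  console.log(await page.title());
  await browser.close();
})();
```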
All we do is instruct Puppeteer to wait until the page renders a title element, which is achieved by invoking waitForSelector. This method basically waits until the selected element is rendered within the page.
In that way - we can easily deal with asynchronous rendering and ensure that elements are visible on the page.
Puppeteer’s library provides tools for approximating how the page looks and behaves on various devices, which are pretty useful when testing a website’s responsiveness.
Let’s emulate a mobile device and navigate to the official website:
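A sketch of manual emulation - the user agent string is an illustrative iPhone one, and 375×812 are the iPhone X display points:

```javascript
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({ headless: false });
  const page = await browser.newPage();

  // Change the user agent to an iPhone-like one (illustrative string)
  await page.setUserAgent(
    'Mozilla/5.0 (iPhone; CPU iPhone OS 11_0 like Mac OS X) ' +
      'AppleWebKit/604.1.38 (KHTML, like Gecko) Version/11.0 Mobile/15A372 Safari/604.1'
  );

  // Adjust the viewport to the iPhone X display points
  await page.setViewport({ width: 375, height: 812, isMobile: true, hasTouch: true });

  await page.goto('https://pptr.dev');
})();
```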
We choose to emulate an iPhone X - which means changing the user agent appropriately. Furthermore, we adjust the viewport size according to the display points that appear here.
It’s easy to understand that setUserAgent defines a specific user agent for the page, whereas setViewport modifies the page’s viewport definition. In the case of multiple pages, each one has its own user agent and viewport definition.
Here’s the result of the code example above:
Indeed, the console panel shows us that the page is opened with the right user agent and viewport size.
The truth is that we don’t have to specify the iPhone X’s descriptors explicitly, because the library ships with a built-in list of device descriptors. On top of that, it provides a method called emulate, which is practically a shortcut for invoking setUserAgent and setViewport, one after the other.
Let’s use that:
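Using the built-in descriptor could look like:

```javascript
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({ headless: false });
  const page = await browser.newPage();

  // The built-in descriptor bundles the user agent and viewport definitions
  await page.emulate(puppeteer.devices['iPhone X']);

  await page.goto('https://pptr.dev');
})();
```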
It’s merely changed to pass the built-in descriptor to emulate (instead of declaring it explicitly). Notice that we take the descriptors out of Puppeteer’s bundled device list.
The Page class supports emitting various events by actually extending Node.js’s EventEmitter. This means we can use the natively supported methods in order to handle these events - such as on, once, removeListener and so on.
Here’s the list of the supported events:
From looking at the list above - we clearly understand that the supported events include aspects of loading, frames, metrics, console, errors, requests, responses and even more!
Let’s simulate and trigger part of the events by adding this script:
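A sketch of listening for some of these events and triggering them from within the page:

```javascript
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Handle a few of the supported events
  page.on('load', () => console.log('page is loaded'));
  page.on('console', (message) => console.log(`console: ${message.text()}`));
  page.on('request', (request) => console.log(`request: ${request.url()}`));
  page.on('response', (response) => console.log(`response: ${response.url()}`));

  await page.goto('https://example.com');

  // Execute a script within the page context to trigger more events
  await page.evaluate(() => console.log('hello from the page'));

  await browser.close();
})();
```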
As we probably know, evaluate just executes the supplied script within the page context.
Though, the output is going to reflect the events we listen for:
In case you wonder - it’s possible to listen for custom events that are triggered in the page. Basically it means defining the event handler on the page’s window using the addEventListener method.
Check out this example to understand exactly how to implement it.
In general, the mouse controls the motion of a pointer in two dimensions within a viewport.
Unsurprisingly, Puppeteer represents the mouse by a class called Mouse, and every Page instance holds such an instance - which allows performing operations such as changing the mouse position and clicking within the viewport.
Let’s start with changing the mouse position:
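A sketch of the scenario - the selector and coordinates below are illustrative assumptions about the website’s layout, not values taken from the original article:

```javascript
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({ headless: false });
  const page = await browser.newPage();

  await page.setViewport({ width: 1280, height: 800 });
  await page.goto('https://pptr.dev');

  // Wait explicitly for the sidebar component to ensure it's rendered
  await page.waitForSelector('sidebar-component'); // illustrative selector

  // Position the mouse over the center of the second sidebar link
  await page.mouse.move(40, 150); // illustrative coordinates
})();
```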
The scenario we simulate is moving the mouse over the second link of the left API sidebar. We set a viewport size and wait explicitly for the sidebar component to ensure it’s really rendered.
Then, we invoke move in order to position the mouse at the appropriate coordinates - which actually represent the center of the second link.
This is the expected result:
Although it’s hard to see, the second link is hovered as we planned.
The next step is simply clicking on the link by the respective coordinates:
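Clicking at the same illustrative coordinates, with a delayed press:

```javascript
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({ headless: false });
  const page = await browser.newPage();
  await page.setViewport({ width: 1280, height: 800 });
  await page.goto('https://pptr.dev');

  // Click the illustrative coordinates; the delay separates press and release
  await page.mouse.click(40, 150, { delay: 1000 });
})();
```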
Instead of changing the position explicitly, we just use click - which basically triggers the mousedown and mouseup events, one after another.
Note: We delay the pressing in order to demonstrate how to modify the click behavior, nothing more. It’s worth pointing out that we can also control the mouse buttons (left, center, right) and the number of clicks.
Another nice thing is the ability to simulate a drag and drop behavior easily:
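A sketch of the grab-drag-release sequence (the coordinates are illustrative):

```javascript
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({ headless: false });
  const page = await browser.newPage();
  await page.goto('https://example.com');

  // Grab the mouse at the source position... (illustrative coordinates)
  await page.mouse.move(60, 200);
  await page.mouse.down();

  // ...drag it to the target position and release
  await page.mouse.move(300, 400);
  await page.mouse.up();
})();
```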
All we do is use the Mouse methods to grab the mouse at one position, drag it to another, and afterward release it.
The keyboard is another way to interact with the page, mostly for input purposes.
Similar to the mouse, Puppeteer represents the keyboard by a class called Keyboard - and every Page instance holds such an instance.
Let’s type some text within the search input:
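A sketch of typing into the search input - the selectors are illustrative assumptions about the website’s markup:

```javascript
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({ headless: false });
  const page = await browser.newPage();
  await page.goto('https://pptr.dev');

  // Wait for the toolbar to ensure the search input is rendered
  await page.waitForSelector('topbar-component'); // illustrative selector

  // Focus the search input and type into it
  await page.focus('input[type="search"]'); // illustrative selector
  await page.keyboard.type('Browser', { delay: 100 });
})();
```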
Notice that we wait for the toolbar (instead of the API sidebar). Then, we focus the search input element and simply type text into it.
On top of typing text, it’s obviously possible to trigger keyboard events:
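For instance, pressing keys to pick a search result (selectors again illustrative):

```javascript
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({ headless: false });
  const page = await browser.newPage();
  await page.goto('https://pptr.dev');

  await page.focus('input[type="search"]'); // illustrative selector
  await page.keyboard.type('Browser');

  // Press ArrowDown twice and Enter to choose the third search result
  await page.keyboard.press('ArrowDown');
  await page.keyboard.press('ArrowDown');
  await page.keyboard.press('Enter');
})();
```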
Basically, we press ArrowDown twice and then Enter in order to choose the third search result.
See that in action:
By the way, it’s nice to know that there is a list of the key codes.
Taking screenshots with Puppeteer is quite an easy task.
The API provides us a dedicated method for that:
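A minimal sketch:

```javascript
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://pptr.dev');

  // The screenshot method does the magic - we only supply an output path
  await page.screenshot({ path: 'screenshot.png' });

  await browser.close();
})();
```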
As we see, the screenshot method does all the magic - we just have to supply a path for the output.
Moreover, it’s also possible to control the type and quality of the image, and even clip it:
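For example (the clip region values are illustrative):

```javascript
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://pptr.dev');

  // A screenshot with an explicit type, quality and clipping region
  await page.screenshot({
    path: 'screenshot.jpeg',
    type: 'jpeg',
    quality: 70, // only relevant for jpeg
    clip: { x: 0, y: 0, width: 640, height: 480 },
  });

  await browser.close();
})();
```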
Here’s the output:
Puppeteer is also useful for generating a PDF file from the page content.
Let’s demonstrate that:
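A minimal sketch (note that PDF generation is only supported in headless mode):

```javascript
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://pptr.dev');

  // Generate an A4 PDF from the page content
  await page.pdf({ path: 'page.pdf', format: 'A4' });

  await browser.close();
})();
```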
Many websites customize their content based on the user’s geolocation.
Modifying the geolocation of a page is pretty obvious:
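A sketch of granting the permission and overriding the coordinates:

```javascript
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();

  // Grant the geolocation permission on the default browser context
  const context = browser.defaultBrowserContext();
  await context.overridePermissions('https://pptr.dev', ['geolocation']);

  const page = await browser.newPage();
  await page.goto('https://pptr.dev');

  // Override the current geolocation with the north pole's coordinates
  await page.setGeolocation({ latitude: 90, longitude: 0 });

  await browser.close();
})();
```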
First, we grant the browser context the appropriate permissions. Then, we use setGeolocation to override the current geolocation with the coordinates of the north pole.
Here’s what we get when printing the location from within the page:
The accessibility tree is a subset of the DOM that includes only elements with relevant information for assistive technologies such as screen readers, voice controls and so on. Having the accessibility tree means we can analyze and test the accessibility support in the page.
When it comes to Puppeteer, it enables capturing the current state of the tree:
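A minimal sketch:

```javascript
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://pptr.dev');

  // Capture the current state of the accessibility tree
  const snapshot = await page.accessibility.snapshot();
  console.log(snapshot);

  await browser.close();
})();
```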
The snapshot doesn’t pretend to be the full tree, but rather includes just the interesting nodes (those which are acceptable by most of the assistive technologies).
Note: We can obtain the full tree by setting interestingOnly to false.
The code coverage feature was introduced officially as part of Chrome v59 - and provides the ability to measure how much code is being used, compared to the code that is actually loaded. In this manner, we can reduce the dead code and eventually speed up the loading time of the pages.
With Puppeteer, we can manipulate the same feature programmatically:
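A sketch of collecting the coverage and summing the used bytes (calculateUsedBytes is our helper name, not a Puppeteer API):

```javascript
const puppeteer = require('puppeteer');

// Sum the covered ranges of every entry, against its total length
const calculateUsedBytes = (type, coverage) =>
  coverage.map(({ url, ranges, text }) => {
    let usedBytes = 0;
    ranges.forEach((range) => (usedBytes += range.end - range.start));
    return { url, type, usedBytes, totalBytes: text.length };
  });

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Start collecting the JS and CSS coverage before navigating
  await Promise.all([page.coverage.startJSCoverage(), page.coverage.startCSSCoverage()]);
  await page.goto('https://pptr.dev');
  const [jsCoverage, cssCoverage] = await Promise.all([
    page.coverage.stopJSCoverage(),
    page.coverage.stopCSSCoverage(),
  ]);

  console.log([
    ...calculateUsedBytes('js', jsCoverage),
    ...calculateUsedBytes('css', cssCoverage),
  ]);

  await browser.close();
})();
```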
Thereafter, we define calculateUsedBytes which goes through the collected coverage data and calculates how many bytes are being used (based on the coverage ranges).
At last, we merely invoke the created function on both coverages.
Let’s look at the output:
As expected, the output contains usedBytes and totalBytes for each file.
One objective of measuring performance in terms of websites is to analyze how a page performs, during load and runtime - intending to make it faster.
Let’s see how we use Puppeteer to measure our page performance:
1️⃣ - Analyzing load time through metrics
Navigation Timing is a Web API that provides information and metrics relating to page navigation and load events, and is accessible via window.performance.
In order to benefit from it, we should evaluate this API within the page context:
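A sketch of evaluating the API inside the page and deriving one load-time metric:

```javascript
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://pptr.dev');

  // window.performance isn't serializable as-is, so we stringify it
  const rawMetrics = await page.evaluate(() => JSON.stringify(window.performance));
  const { timing } = JSON.parse(rawMetrics);

  // Total load time: navigation start until the load event ends
  console.log(timing.loadEventEnd - timing.navigationStart);

  await browser.close();
})();
```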
Notice that if evaluate receives a function which returns a non-serializable value, then evaluate eventually returns undefined. That’s exactly why we stringify window.performance when evaluating within the page context.
The result is transformed into a comfy object, which looks like the following:
Now we can simply combine these metrics and calculate different load times over the loading timeline.
For instance, loadEventEnd - navigationStart represents the time from the start of the navigation until the page is fully loaded.
Note: All explanations about the different timings above are available here.
2️⃣ - Analyzing runtime through metrics
As for the runtime metrics, unlike load time, Puppeteer provides a neat API:
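A minimal sketch:

```javascript
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://pptr.dev');

  // Runtime metrics, backed by Performance.getMetrics of the DevTools Protocol
  const metrics = await page.metrics();
  console.log(metrics.JSHeapUsedSize); // the page's actual memory usage

  await browser.close();
})();
```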
We invoke the metrics method and get the following result:
The interesting metric above is apparently JSHeapUsedSize which represents, in other words, the actual memory usage of the page.
Notice that the result is actually the output of Performance.getMetrics, which is part of the Chrome DevTools Protocol.
3️⃣ - Analyzing browser activities through tracing
Chromium Tracing is a profiling tool that allows recording what the browser is really doing under the hood - with an emphasis on every thread, tab, and process. It’s also reflected in Chrome DevTools as part of the Timeline panel.
Furthermore, this tracing ability is also possible with Puppeteer - which, as we might guess, practically uses the Chrome DevTools Protocol.
For example, let’s record the browser activities during navigation:
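A minimal sketch:

```javascript
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Record the browser activities during navigation into trace.json
  await page.tracing.start({ path: 'trace.json' });
  await page.goto('https://pptr.dev');
  await page.tracing.stop();

  await browser.close();
})();
```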
When the recording is stopped, a file called trace.json is created, containing output that looks like the following:
Now that we have the trace file, we can open it using Chrome DevTools, chrome://tracing or Timeline Viewer.
Here’s the Performance panel after importing the trace file into the DevTools:
Today we introduced Puppeteer’s API through concrete examples.
Let’s recap the main points:
- Puppeteer is a Node.js library for automating, testing and scraping web pages on top of the Chrome DevTools Protocol.
- Puppeteer’s ecosystem provides a lightweight package, puppeteer-core, which is a browser automation library that interacts with any DevTools-Protocol-based browser without installing Chromium.
- Puppeteer’s ecosystem provides a package, which is actually the full product, that installs Chromium in addition to the browser automation library.
- Puppeteer provides the ability to launch a Chromium browser instance or just connect an existing instance.
- Puppeteer’s ecosystem used to provide an experimental package, puppeteer-firefox, for interacting with Firefox - nowadays experimental Firefox support comes through the PUPPETEER_PRODUCT environment variable.
- The browser context allows separating different sessions for a single browser instance.
- Puppeteer launches the browser in headless mode by default, which merely uses the command line. A headful mode, which opens the browser with a GUI, is supported as well.
- Puppeteer provides several ways to debug our application in the browser, whereas debugging the process that executes Puppeteer is the same as debugging a regular Node.js process.
- Puppeteer allows navigating to a page by a URL and operating the page through the mouse and keyboard.
- Puppeteer allows examining a page’s visibility, behavior and responsiveness on various devices.
- Puppeteer allows taking screenshots of the page and generating PDFs from the content, easily.
- Puppeteer allows analyzing and testing the accessibility support in the page.
- Puppeteer allows speeding up page performance by providing information about dead code, handy metrics and a manual tracing ability.
And finally, Puppeteer is a powerful browser automation tool with a pretty simple API. A decent number of capabilities are supported, including some we haven’t covered at all - which is why your next step could definitely be the official documentation. 😉
The final project is attached here:
VS Code Snippets
Well, if you wish to get some useful Puppeteer API code snippets for Visual Studio Code - then the following extension might interest you:
You’re welcome to take a look at the extension page.