
Postgres sample data

Download sample data for learning, testing, and exploring Neon

This guide describes how to download and install sample data for use with Neon.

Prerequisites

  • wget for downloading datasets, unless otherwise instructed. If your system does not support wget, you can paste the source file address in your browser's address bar.
  • A psql client for connecting to your Neon database and loading data. This client is included with a standalone PostgreSQL installation. See PostgreSQL Downloads.
  • A pg_restore client if you are loading the employees or postgres_air database. The pg_restore client is included with a standalone PostgreSQL installation. See PostgreSQL Downloads.
  • A Neon database connection string. After creating a database, you can obtain the connection string from the Connection Details widget on the Neon Dashboard. In the instructions that follow, replace postgres://[user]:[password]@[neon_hostname]/[dbname] with your connection string.
  • A Neon Pro account if you intend to install a dataset larger than 3 GB.
  • Instructions for each dataset require that you create a database. You can do so from a client such as psql or from the Neon SQL Editor.
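As an example, creating a database from psql might look like the following; the database name `sample_db` is a placeholder, and the connection string placeholders follow the format shown above:

```bash
# Connect to any existing database and create a new one
psql postgres://[user]:[password]@[neon_hostname]/[dbname] \
  -c "CREATE DATABASE sample_db;"
```

The same `CREATE DATABASE sample_db;` statement can be run directly in the Neon SQL Editor.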

Sample data

Sample datasets are listed from smallest to largest installed size. Please be aware that the Neon Free Tier has a storage limit of 3 GB per branch. Datasets larger than 3 GB cannot be loaded on the Free Tier.

| Name | Tables | Records | Source file size | Installed size |
|------|--------|---------|------------------|----------------|
| Periodic table data | 1 | 118 | 17 KB | 7.2 MB |
| World Happiness Index | 1 | 156 | 9.4 KB | 7.2 MB |
| Titanic passenger data | 1 | 1309 | 220 KB | 7.5 MB |
| Netflix data | 1 | 8807 | 3.2 MB | 11 MB |
| Pagila database | 33 | 62322 | 3 MB | 15 MB |
| Chinook database | 11 | 77929 | 1.8 MB | 17 MB |
| Lego database | 8 | 633250 | 13 MB | 42 MB |
| Employees database | 6 | 3919015 | 34 MB | 333 MB |
| Wikipedia vector embeddings | 1 | 25000 | 1.7 GB | 850 MB |
| Postgres air | 10 | 67228600 | 1.2 GB | 6.7 GB |

note

Installed size is measured using the query: SELECT pg_size_pretty(pg_database_size('your_database_name')). The reported size for small datasets may appear larger than expected due to inherent Postgres storage overhead.

Periodic table data

A table containing data about the periodic table of elements.

  1. Create a periodic_table database:

  2. Download the source file:

  3. Navigate to the directory where you downloaded the source file, and run the following command:

  4. Connect to the periodic_table database:

  5. Look up the element with atomic number 10:
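Concretely, the five steps above might look like the following. This is a sketch: the download URL, file name, and the quoted `"AtomicNumber"` column name are assumptions that may differ from the actual dataset, and the connection string placeholders must be replaced with your own.

```bash
# Step 1: create the database (run against any existing database)
psql postgres://[user]:[password]@[neon_hostname]/[dbname] -c "CREATE DATABASE periodic_table;"

# Step 2: download the source file (URL is an assumption)
wget https://example.com/periodic_table.sql

# Step 3: load the data
psql -d "postgres://[user]:[password]@[neon_hostname]/periodic_table" -f periodic_table.sql

# Steps 4-5: connect and look up element 10 (column name is an assumption)
psql postgres://[user]:[password]@[neon_hostname]/periodic_table \
  -c 'SELECT * FROM periodic_table WHERE "AtomicNumber" = 10;'
```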

World Happiness Index

A dataset with multiple indicators for evaluating the happiness of countries of the world.

  1. Create a world_happiness database:

  2. Download the source file:

  3. Navigate to the directory where you downloaded the source file, and run the following command:

  4. Connect to the world_happiness database:

  5. Find the countries where the happiness score is above average but the GDP per capita is below average:
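The final query in the steps above compares each country to the dataset's averages. A sketch of how it might be written, assuming a `world_happiness` table with `country_name`, `ladder_score`, and `gdp_per_capita` columns (all names are assumptions):

```sql
-- Above-average happiness, below-average GDP per capita
-- (table and column names are assumptions)
SELECT country_name, ladder_score, gdp_per_capita
FROM world_happiness
WHERE ladder_score   > (SELECT AVG(ladder_score)   FROM world_happiness)
  AND gdp_per_capita < (SELECT AVG(gdp_per_capita) FROM world_happiness)
ORDER BY ladder_score DESC;
```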

Titanic passenger data

A dataset containing information on the passengers aboard the RMS Titanic, which sank on its maiden voyage in 1912.

  1. Create a titanic database:

  2. Download the source file:

  3. Navigate to the directory where you downloaded the source file, and run the following command:

  4. Connect to the titanic database:

  5. Query passengers with the most expensive fares:
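A sketch of the load-and-query sequence above, assuming a source file named `titanic.sql` that creates a `passenger` table with `name` and `fare` columns (the URL and all names are assumptions):

```bash
# Steps 1-3: create, download, load
psql postgres://[user]:[password]@[neon_hostname]/[dbname] -c "CREATE DATABASE titanic;"
wget https://example.com/titanic.sql
psql -d "postgres://[user]:[password]@[neon_hostname]/titanic" -f titanic.sql

# Steps 4-5: the ten most expensive fares
psql postgres://[user]:[password]@[neon_hostname]/titanic \
  -c "SELECT name, fare FROM passenger ORDER BY fare DESC LIMIT 10;"
```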

Netflix data

A dataset containing information about movies and TV shows on Netflix.

  1. Create a netflix database:

  2. Download the source file:

  3. Navigate to the directory where you downloaded the source file, and run the following command:

  4. Connect to the netflix database:

  5. Find the directors with the most movies in the database:
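The director query in step 5 might be sketched as follows, assuming a `netflix_shows` table with `type` and `director` columns (table and column names are assumptions); rows without a director are excluded:

```sql
-- Directors with the most movies (table/column names are assumptions)
SELECT director, COUNT(*) AS movie_count
FROM netflix_shows
WHERE type = 'Movie'
  AND director IS NOT NULL
GROUP BY director
ORDER BY movie_count DESC
LIMIT 10;
```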

Pagila database

Sample data for a fictional DVD rental store. Pagila includes tables for films, actors, film categories, stores, customers, payments, and more.

  1. Create a pagila database:

  2. Download the source file:

  3. Navigate to the directory where you downloaded the source file, and run the following command:

  4. Connect to the pagila database:

  5. Find the top 10 most popular film categories based on rental frequency:
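The rental-frequency query in step 5 can be sketched against the standard Pagila schema, which links rentals to categories through `inventory` and `film_category` (assuming this dump matches the standard schema):

```sql
-- Top 10 film categories by rental count (assumes the standard Pagila schema)
SELECT c.name AS category, COUNT(r.rental_id) AS rentals
FROM rental r
JOIN inventory     i  ON r.inventory_id = i.inventory_id
JOIN film_category fc ON i.film_id      = fc.film_id
JOIN category      c  ON fc.category_id = c.category_id
GROUP BY c.name
ORDER BY rentals DESC
LIMIT 10;
```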

Chinook database

A sample database for a digital media store, including tables for artists, albums, media tracks, invoices, customers, and more.

  1. Create a chinook database:

  2. Download the source file:

  3. Navigate to the directory where you downloaded the source file, and run the following command:

  4. Connect to the chinook database:

  5. Find out the most sold item by track title:
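Step 5 might be sketched as follows. Chinook ports for Postgres commonly use quoted CamelCase identifiers; whether this dump does is an assumption, so adjust the quoting if the tables are lowercase:

```sql
-- Most sold item by track title (quoted identifiers are an assumption)
SELECT t."Name" AS track, SUM(il."Quantity") AS total_sold
FROM "InvoiceLine" il
JOIN "Track" t ON il."TrackId" = t."TrackId"
GROUP BY t."Name"
ORDER BY total_sold DESC
LIMIT 1;
```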

Lego database

A dataset containing information about various LEGO sets, their themes, parts, colors, and other associated data.

  1. Create a lego database:

  2. Download the source file:

  3. Navigate to the directory where you downloaded the source file, and run the following command:

  4. Connect to the lego database:

  5. Find the top 5 LEGO themes by the number of sets:
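Step 5 might be sketched against the Rebrickable-style LEGO schema, where `sets.theme_id` references `themes.id` (assuming this dump follows that layout):

```sql
-- Top 5 themes by number of sets (assumes Rebrickable-style table names)
SELECT t.name AS theme, COUNT(s.set_num) AS set_count
FROM sets s
JOIN themes t ON s.theme_id = t.id
GROUP BY t.name
ORDER BY set_count DESC
LIMIT 5;
```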

Employees database

A dataset containing details about employees, their departments, salaries, and more.

  1. Create the database and schema:

  2. Download the source file:

  3. Navigate to the directory where you downloaded the source file, and run the following command:

    Database objects are created in the employees schema rather than the public schema.

  4. Connect to the employees database:

  5. Find the top 5 departments with the highest average salary:
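A sketch of the sequence above. The dump is loaded with pg_restore rather than psql, and objects land in the `employees` schema; the download URL, file name, and the table/column names in the query are all assumptions:

```bash
# Step 1: create the database
psql postgres://[user]:[password]@[neon_hostname]/[dbname] -c "CREATE DATABASE employees;"

# Step 2: download the dump (URL and file name are assumptions)
wget https://example.com/employees.sql.gz

# Step 3: restore; despite the name, the file is assumed to be a
# pg_restore-compatible custom-format dump
pg_restore -d "postgres://[user]:[password]@[neon_hostname]/employees" employees.sql.gz

# Steps 4-5: top 5 departments by average salary (schema/table names are assumptions)
psql postgres://[user]:[password]@[neon_hostname]/employees <<'SQL'
SELECT d.dept_name, ROUND(AVG(s.amount)) AS avg_salary
FROM employees.salary s
JOIN employees.department_employee de ON s.employee_id = de.employee_id
JOIN employees.department d ON de.department_id = d.id
GROUP BY d.dept_name
ORDER BY avg_salary DESC
LIMIT 5;
SQL
```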

  • Source: The initial dataset was created by Fusheng Wang and Carlo Zaniolo of Siemens Corporate Research. Giuseppe Maxia designed the relational schema, and Patrick Crews transformed the data into a format compatible with MySQL. Their work can be accessed here: https://github.com/datacharmer/test_db. The data was subsequently adapted to a format suitable for PostgreSQL: https://github.com/h8/employees-database. The data was artificially generated and contains some inconsistencies.
  • License: This work is licensed under the Creative Commons Attribution-Share Alike 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by-sa/3.0/ or send a letter to Creative Commons, 171 Second Street, Suite 300, San Francisco, California, 94105, USA.

Wikipedia vector embeddings

An OpenAI example dataset containing pre-computed vector embeddings for 25,000 Wikipedia articles. It is intended for use with the pgvector Postgres extension, which you must install first in order to create a table with vector-type columns. For a Jupyter Notebook that uses this dataset with Neon, refer to the following GitHub repository: neon-vector-search-openai-notebooks

  1. Download the zip file (~700 MB):

  2. Navigate to the directory where you downloaded the zip file, and run the following command to extract the source file:

  3. Create a wikipedia database:

  4. Connect to the wikipedia database:

  5. Install the pgvector extension:

  6. Create the following table in your database:

  7. Create vector search indexes:

  8. Navigate to the directory where you extracted the source file, and run the following command:

note

If you encounter a memory error related to the maintenance_work_mem setting, refer to Indexing vectors for how to increase this setting.
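The eight steps above might be sketched as follows. The download URL, file names, and the `articles` table definition are assumptions modeled on OpenAI's pgvector example (OpenAI embeddings are 1536-dimensional); the actual schema shipped with this dataset may differ:

```bash
# Steps 1-2: download and extract (URL and file names are assumptions)
wget https://example.com/vector_database_wikipedia_articles_embedded.zip
unzip vector_database_wikipedia_articles_embedded.zip

# Step 3: create the database
psql postgres://[user]:[password]@[neon_hostname]/[dbname] -c "CREATE DATABASE wikipedia;"

# Steps 4-7: install pgvector, create the table, create the indexes
# (table and column names are assumptions)
psql postgres://[user]:[password]@[neon_hostname]/wikipedia <<'SQL'
CREATE EXTENSION vector;
CREATE TABLE articles (
    id             INTEGER PRIMARY KEY,
    url            TEXT,
    title          TEXT,
    content        TEXT,
    title_vector   vector(1536),   -- OpenAI embeddings: 1536 dimensions
    content_vector vector(1536),
    vector_id      INTEGER
);
CREATE INDEX ON articles USING ivfflat (title_vector vector_cosine_ops);
CREATE INDEX ON articles USING ivfflat (content_vector vector_cosine_ops);
SQL

# Step 8: load the extracted CSV (file name is an assumption)
psql postgres://[user]:[password]@[neon_hostname]/wikipedia \
  -c "\copy articles FROM 'vector_database_wikipedia_articles_embedded.csv' WITH (FORMAT csv, HEADER true)"
```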

Postgres air database

An airport database containing information about airports, aircraft, bookings, passengers, and more.

  1. Download the file (1.3 GB) from Google Drive.

  2. Create a postgres_air database:

  3. Navigate to the directory where you downloaded the source file, and run the following command:

    Database objects are created in a postgres_air schema rather than the public schema.

  4. Connect to the postgres_air database:

  5. Find the aircraft type with the most flights:
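A sketch of steps 2-5 above; the dump is restored with pg_restore into the `postgres_air` schema, and the dump file name and the `flight`/`aircraft` table and column names are assumptions:

```bash
# Step 2: create the database
psql postgres://[user]:[password]@[neon_hostname]/[dbname] -c "CREATE DATABASE postgres_air;"

# Step 3: restore the downloaded dump (file name is an assumption)
pg_restore -d "postgres://[user]:[password]@[neon_hostname]/postgres_air" postgres_air.backup

# Steps 4-5: aircraft model with the most flights (table/column names are assumptions)
psql postgres://[user]:[password]@[neon_hostname]/postgres_air <<'SQL'
SELECT a.model, COUNT(*) AS flights
FROM postgres_air.flight f
JOIN postgres_air.aircraft a ON f.aircraft_code = a.code
GROUP BY a.model
ORDER BY flights DESC
LIMIT 1;
SQL
```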

Need help?

Join the Neon community forum to ask questions or see what others are doing with Neon. Neon Pro Plan users can open a support ticket from the console. For more detail, see Getting Support.
