Bulk data documentation
Bulk data files for our whitelisted jurisdictions (currently Illinois and Arkansas) are available to everyone without a login.
Bulk data files for the remaining jurisdictions are available to research scholars who sign a research agreement. You can request a research agreement by creating an account and then visiting your account page.
See our About page for details on our data access restrictions.
To download all cases via the API, use the body_format and filter_type parameters on the bulk endpoint to select all cases, sorted by jurisdiction, in your desired body_format.
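As a sketch, such a query can be assembled in Python. This assumes the bulk listing lives at /v1/bulk/ (inferred from the download URL shown further down) and that filter_type accepts "jurisdiction"; check the API reference for the exact parameter values.

```python
from urllib.parse import urlencode

# Assumed endpoint and parameter values; verify against the API reference.
params = {"body_format": "text", "filter_type": "jurisdiction"}
url = "https://api.case.law/v1/bulk/?" + urlencode(params)
print(url)
```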
If you are downloading bulk files manually, you may find that the browser times out on the largest files;
in that case, use
wget, which retries when it encounters a network problem. Here's an example for the
U.S. file with case body in text format:
wget --header="Authorization: Token your-api-token" -O "United States-20190418-text.zip" "https://api.case.law/v1/bulk/17050/download/"
In this case, you'd replace
your-api-token with your API token from the user details page.
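The same authenticated request can be built with Python's standard library. This is a minimal sketch mirroring the wget example above (same token placeholder and file ID); the actual download call is left commented out because the file is large.

```python
import urllib.request

def bulk_request(url, token):
    """Build a request carrying the same token auth header as the wget example."""
    return urllib.request.Request(url, headers={"Authorization": "Token " + token})

req = bulk_request("https://api.case.law/v1/bulk/17050/download/", "your-api-token")

# To actually fetch the file, stream the response to disk, e.g.:
# import shutil
# with urllib.request.urlopen(req) as resp, \
#         open("United States-20190418-text.zip", "wb") as out:
#     shutil.copyfileobj(resp, out)
```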
Each file that we offer for download is equivalent to a particular query to our API. For example, the file
"Illinois-20180829-text.zip" contains all cases that would be returned by
an API query with
full_case=true&jurisdiction=ill&body_format=text. We offer files for each possible
jurisdiction value and each possible
reporter value, combined with each possible body_format value.
The JSON objects returned by the API and in bulk files differ only in that bulk JSON objects do not include
"url" fields, which can be reconstructed from object IDs.
Bulk data files are provided as zipped directories. Each directory is in BagIt format, with a layout like this (the manifest filename depends on the hash algorithm used):

    Illinois-20180829-text/
        bagit.txt
        bag-info.txt
        manifest-sha256.txt
        data/
            data.jsonl.xz
Because the zip file provides no additional compression, we recommend uncompressing it for convenience and keeping the uncompressed directory on disk.
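A bag's payload manifest lists a checksum for each data file, so you can verify a download before working with it. This is a minimal sketch, assuming a manifest-sha256.txt with "digest  path" lines (real bags may use a different hash algorithm); the demo runs against a tiny synthetic bag rather than a real download.

```python
import hashlib, os, tempfile

def verify_bag(bag_dir, manifest="manifest-sha256.txt"):
    # Recompute each payload file's digest and compare it to the manifest entry.
    with open(os.path.join(bag_dir, manifest)) as f:
        for line in f:
            digest, path = line.strip().split(None, 1)
            with open(os.path.join(bag_dir, path), "rb") as payload:
                if hashlib.sha256(payload.read()).hexdigest() != digest:
                    return False
    return True

# Demo against a tiny synthetic bag (stand-in for a real bulk download):
bag = tempfile.mkdtemp()
os.makedirs(os.path.join(bag, "data"))
with open(os.path.join(bag, "data", "data.jsonl.xz"), "wb") as f:
    f.write(b"example")
with open(os.path.join(bag, "manifest-sha256.txt"), "w") as f:
    f.write(hashlib.sha256(b"example").hexdigest() + "  data/data.jsonl.xz\n")
print(verify_bag(bag))  # True
```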
Caselaw data is stored within the
data/data.jsonl.xz file. The .xz suffix indicates that the file is compressed with xz, and the .jsonl suffix indicates a text file in which each line is a JSON object.
Using Bulk Data
The data file can be decompressed with standard tools, such as xz -d data/data.jsonl.xz. However, this increases the disk space needed by about 500%, and in most cases is unnecessary. Instead we recommend interacting directly with the compressed files.
To read the file from the command line, run:
xzcat data/data.jsonl.xz | less
If you install jq you can get nicely formatted output ...
xzcat data/data.jsonl.xz | jq | less
... or run more sophisticated queries. For example, to extract the name of each case:
xzcat data/data.jsonl.xz | jq .name | less
You can also interact directly with the compressed files from code. The following example prints the name of each case using Python:
import lzma, json

with lzma.open("data/data.jsonl.xz") as in_file:
    for line in in_file:
        case = json.loads(str(line, 'utf8'))
        print(case['name'])
To load the compressed data file into an R data frame, do something like this:
> install.packages("jsonlite")
> library(jsonlite)
> ark <- stream_in(xzfile("Arkansas-20190416-text/data/data.jsonl.xz"))