When we visit a website, our browser makes an HTTP request for that page's HTML, as well as subsequent HTTP requests to load any assets (fonts, images, videos, etc). In the case of our projects (which we host on GitHub's servers), someone visiting our work in their browser sends requests to the GitHub servers, which send back the code we wrote (along with any other files/assets we might be storing there). But our code can also make requests to other servers. In this way our work can incorporate data and other assets from various other parts of the web.
There are loads of these sorts of APIs online; apilist.fun and programmableweb.com are just a couple of sites that attempt to aggregate as many of them as they can. The City of Chicago also has a REST API, which gives us access to all sorts of city data at data.cityofchicago.org
Below you'll find 3 netnet examples, which each send a request to the same REST API, dog.ceo, which returns a random image of a dog. The first uses the older XMLHttpRequest API, the second uses the Fetch API, and the third uses the newer "async / await" syntax to use the Fetch API with cleaner, easier-to-read code. All three examples technically do the same thing; the difference is the syntax.
XMLHttpRequest (old way)
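A minimal sketch of the XMLHttpRequest approach, assuming a browser environment (XMLHttpRequest and document are browser globals) and that dog.ceo's random-image endpoint responds with JSON containing the image URL in its "message" property:

```javascript
// dog.ceo's random-image endpoint responds with JSON shaped like:
// { "message": "https://images.dog.ceo/breeds/.../dog.jpg", "status": "success" }
function parseDogResponse (jsonString) {
  return JSON.parse(jsonString).message // the image URL
}

// guard so this sketch doesn't error outside the browser
if (typeof XMLHttpRequest !== 'undefined') {
  const xhr = new XMLHttpRequest()
  xhr.open('GET', 'https://dog.ceo/api/breeds/image/random')
  xhr.onload = () => {
    if (xhr.status === 200) {
      // create an <img> element and add the random dog to the page
      const img = document.createElement('img')
      img.src = parseDogResponse(xhr.responseText)
      document.body.appendChild(img)
    }
  }
  xhr.send()
}
```

Note how the logic is split across callbacks: we set up `onload` before calling `send()`, and the response is handled whenever it arrives.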
Fetch API: then(callback) (newer way)
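The same request sketched with the Fetch API and `.then()` callbacks (again assuming a browser page with a `<body>` to append the image to):

```javascript
// quick sanity check on the shape of a dog.ceo response object
function isDogResponse (data) {
  return !!data && data.status === 'success' && typeof data.message === 'string'
}

// data.message holds the random image URL
function displayDog (data) {
  const img = document.createElement('img')
  img.src = data.message
  document.body.appendChild(img)
}

// guard so this sketch doesn't error outside the browser
if (typeof document !== 'undefined') {
  fetch('https://dog.ceo/api/breeds/image/random')
    .then(res => res.json()) // parse the JSON body (also returns a promise)
    .then(data => { if (isDogResponse(data)) displayDog(data) })
    .catch(err => console.error('request failed:', err))
}
```

Fetch returns promises, so instead of assigning callback properties we chain `.then()` calls, with a single `.catch()` for errors anywhere in the chain.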
Fetch API: async/await (newest way)
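And a sketch of the same request using async/await syntax with the Fetch API; the code reads top-to-bottom like synchronous code, though it is still asynchronous under the hood:

```javascript
// fetch a random dog image URL from the dog.ceo API
async function getDogUrl () {
  const res = await fetch('https://dog.ceo/api/breeds/image/random')
  const data = await res.json() // { message: '<image url>', status: 'success' }
  return data.message
}

// guard so this sketch doesn't error outside the browser
if (typeof document !== 'undefined') {
  getDogUrl().then(url => {
    const img = document.createElement('img')
    img.src = url
    document.body.appendChild(img)
  })
}
```

The `await` keyword pauses the async function until the promise resolves, which is why there are no nested callbacks here.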
other data sources
Not all data-driven projects you come across online make use of these 3rd party REST APIs; sometimes data is made available for download so you can host it locally with your project (i.e. upload it directly to your GitHub project like you would images or other assets). There are lots of places to find datasets online. One popular repository of data is kaggle.com, or check out Jeremy Singer-Vine's Data Is Plural newsletter, where he shares interesting datasets on a weekly basis (every single dataset he's previously shared in the newsletter can be found on this spreadsheet)
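A locally hosted dataset can be loaded the same way as a remote API, just with a relative path. A minimal sketch, assuming a hypothetical file called data.json uploaded alongside the project's other assets:

```javascript
// fetch a dataset hosted in the project itself, rather than on a 3rd party server
async function loadLocalData (path) {
  const res = await fetch(path) // relative paths resolve against our own server
  return res.json()
}

// guard so this sketch doesn't error outside the browser
if (typeof document !== 'undefined') {
  loadLocalData('data.json').then(data => console.log(data))
}
```

Because the file lives on the same server as the rest of the project, there are no cross-origin concerns and no API keys to manage.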