This is the repository for the Economic Indicators website, developed by the Ball State University Center for Business and Economic Research.

`/src/Endpoints/EndpointGroups.php` defines groups of endpoints that are displayed together on the same page. Each endpoint corresponds to a "series" in the FRED API.

- The command `bin/cake update_stats` uses these definitions to query the FRED API and populate/update the `metrics` and `statistics` tables
- Each endpoint has one corresponding metric in the database with a matching `metrics.series_id` value
- Each statistic record is associated with one metric, but one of three possible data types, defined in `StatisticsTable`:
  - Value
  - Change since last year
  - Percent change since last year
- The `releases` table stores information pulled from the FRED API about dates on which primary sources are expected to release new data (which then becomes available to FRED, and then gets pulled into the Economic Indicators database)
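As background, FRED exposes each series through its observations endpoint. The sketch below only illustrates the general shape of such a request; the series ID and API key shown are placeholders, not values used by this project.

```shell
# Illustrative only: the general shape of a FRED series request.
# SERIES_ID and FRED_API_KEY are placeholder values, not project configuration.
SERIES_ID="UNRATE"
FRED_API_KEY="your-api-key"
URL="https://api.stlouisfed.org/fred/series/observations?series_id=${SERIES_ID}&api_key=${FRED_API_KEY}&file_type=json"
echo "$URL"
# Fetching for real would be something like: curl -s "$URL"
```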
`bin/cake update_stats`

- Run with the `--only-new` option to only check endpoints with releases on or before today that have not yet been imported. This only adds new stats, rather than also updating existing stats, and doesn't take very much time, so it can be safely automated to run every three hours.
- Run without the `--only-new` option to run a "full" update, which pulls all available statistics from FRED and checks for any updates to already-imported data.
- Run with the `--auto` option to have the script choose between an `--only-new` update and a full update based on how long ago the last full update took place. This is recommended to be run automatically every ~1 hour.
- Run with the `--ignore-lock` option to either allow multiple update processes to take place concurrently (not recommended) or to fix a process lock that failed to be cleared by the previous process.
- Run with the `--choose` option to add/update statistics for a specific endpoint group instead of looping through all endpoint groups.
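The hourly `--auto` run recommended above could be scheduled with a cron entry along these lines (the installation path is an assumption; on the production server, substitute `php bin/cake.php`):

```shell
# Hypothetical crontab entry: automatic update at the top of every hour
0 * * * * cd /var/www/economic-indicators && bin/cake update_stats --auto
```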
`bin/cake update_release_dates`

- Should be automated to run daily.
- Run with the `--only-cache` option to only rebuild the cached release calendar instead of pulling new release information from FRED.
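The daily run could likewise be scheduled with cron (the path and time of day are assumptions; on the production server, substitute `php bin/cake.php`):

```shell
# Hypothetical crontab entry: refresh release dates once a day
15 4 * * * cd /var/www/economic-indicators && bin/cake update_release_dates
```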
`bin/cake make_spreadsheets`

- Can be run manually if there's a problem with the pre-generated spreadsheets, but `bin/cake update_stats` also automatically (re)generates spreadsheets if needed.
- Run with the `--choose` option to regenerate spreadsheets for a specific endpoint group instead of looping through all endpoint groups. This could fix spreadsheets not being regenerated by the all-groups process due to memory problems.
- Run with the `--auto` option to either loop through all spreadsheets that are marked as having updated data or to only re-try a previously failed spreadsheet generation. This can fix some apparently memory-related errors that may be caused by the server's resources being overtaxed when processing all endpoint groups. This is recommended to be automatically run every ~30 minutes.
- Run with the `--verbose` option to output information about memory usage.
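The 30-minute `--auto` cadence recommended above could be scheduled like this (path is an assumption; on the production server, substitute `php bin/cake.php`):

```shell
# Hypothetical crontab entry: retry/refresh spreadsheets every 30 minutes
*/30 * * * * cd /var/www/economic-indicators && bin/cake make_spreadsheets --auto
```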
`bin/cake update_cache`

- Can be run manually to rebuild the cache of query results, though `bin/cake update_stats` updates the cache automatically, if appropriate.
- Run with the `--verbose` option to output information about memory usage.
- Run with the `--choose` option to rebuild the cache for a specific endpoint group instead of looping through all endpoint groups.
- All of the above commands additionally have the `--mute-slack` option, which if used prevents messages from being sent to Slack. This is useful when executing these commands on your own development machine rather than on the production server.
- On the production server, replace `bin/cake` with `php bin/cake.php`.
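Putting those two notes together, invocations might look like the following (`update_stats --auto` is just one example; every command above follows the same pattern):

```shell
# On a development machine: suppress Slack notifications
bin/cake update_stats --auto --mute-slack

# On the production server: same command, different entry point
php bin/cake.php update_stats --auto
```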
- Note that the FRED API occasionally fails to return a valid response, in which case these scripts will re-try the same request a limited number of times before giving up. Those requests will then be attempted again the next time an update script is invoked.
- The FRED API request rate limit is nonspecific, but suspected to be 120 requests per minute. There is a delay between every request and a longer delay after any error, so errors caused by exceeding the allowed rate are not anticipated as long as only one instance of any update script is running at a time.