# WT Search API
An API written in Node.js to fetch information from the Winding Tree platform.

Work in progress. Discussion at https://groups.google.com/forum/#!topic/windingtree/J_6aRddPVWY

## Requirements

- Node.js >= 10
## Development

To install dependencies and run tests:

```sh
git clone git@github.com:windingtree/wt-search-api.git
cd wt-search-api
nvm install
npm install
npm test
```
### Running dev mode

With all the dependencies installed, you can start the dev server.

The first step is to initialize the SQLite database used to store your data. If you want to use a different database, feel free to change the connection settings in the appropriate configuration file in `src/config/`.

```sh
npm run createdb-dev
```

If you'd like to start afresh later, just delete the `.dev.sqlite` file.
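For example, resetting the development database then comes down to:

```sh
rm .dev.sqlite
npm run createdb-dev
```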
Now we can run our dev server:

```sh
npm run dev
```

By default, this will try to connect to an instance of wt-read-api running locally on http://localhost:3000 and index the hotels from there. You can override the default in an appropriate config file in `src/config` or with the `READ_API_URL` environment variable.
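For instance, to index hotels from the playground Read API instead (its URL is also listed in the configuration section below), you could run:

```sh
READ_API_URL=https://playground-api.windingtree.com npm run dev
```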
Right after you start the server, it will try to sync and index all of the data immediately.
## Running this server

### Docker

You can run the whole API in a Docker container, and you can control which config will be used by passing an appropriate value in the `WT_CONFIG` variable at runtime. In the current setup, the database is set up during container startup; you can skip this with the `SKIP_DB_SETUP` environment variable.

```sh
$ docker build -t windingtree/wt-search-api .
$ docker run -p 8080:1918 -e WT_CONFIG=playground windingtree/wt-search-api
```

After that, you can access the wt-search-api on local port 8080.
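If your database has already been prepared elsewhere, a run that skips the setup step might look like this (the exact value treated as truthy by `SKIP_DB_SETUP` is an assumption here):

```sh
$ docker run -p 8080:1918 -e WT_CONFIG=playground -e SKIP_DB_SETUP=1 windingtree/wt-search-api
```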
### NPM

You can install and run this from NPM as well:

```sh
$ npm install -g @windingtree/wt-search-api
$ WT_CONFIG=playground wt-search-api
```

This will also create a local SQLite instance in the directory where you run the wt-search-api command. To prevent that, you can suppress DB creation with the `SKIP_DB_SETUP` environment variable.
### Running in production

You can customize the behaviour of the instance with a number of environment variables, which get applied if you run the API with `WT_CONFIG=envvar`. These are:
- `WT_CONFIG` - Which config will be used. Defaults to `dev`.
- `PORT` - HTTP port where the API will listen. Defaults to `1918`.
- `BASE_URL` - Base URL of this API instance, for example `https://playground-search-api.windingtree.com`.
- `READ_API_URL` - Read API URL, for example `https://playground-api.windingtree.com`.
- `DB_CLIENT` - Knex database client name, for example `sqlite3`.
- `DB_CLIENT_OPTIONS` - Knex database client options as a JSON string, for example `{"filename": "./envvar.sqlite"}`.
- `LOG_LEVEL` - Log level. Defaults to `info`.
- `SKIP_DB_SETUP` - Whether to skip setting up a new database upon startup.
- `DEFAULT_PAGE_SIZE` - How many items will be returned by default. Defaults to `30`.
- `MAX_PAGE_SIZE` - The maximum value of the `limit` query parameter. Defaults to `300`.
- `SYNC_INTERVAL` - How often a complete resync occurs, in seconds. Defaults to one hour.
- `SYNC_INITIAL` - Whether to perform the initial sync immediately after server start. Defaults to `true`.
We recommend using a more robust database than `sqlite3` for any serious deployment.
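Putting it together, an `envvar`-based launch might look like the following sketch; the values are simply the examples from the list above, and a real deployment will differ:

```sh
WT_CONFIG=envvar \
PORT=1918 \
BASE_URL=https://playground-search-api.windingtree.com \
READ_API_URL=https://playground-api.windingtree.com \
DB_CLIENT=sqlite3 \
DB_CLIENT_OPTIONS='{"filename": "./envvar.sqlite"}' \
LOG_LEVEL=info \
wt-search-api
```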
## Examples

### Search and sort by location

The following command will get you the 3 hotels closest to 46.770066,23.600819, sorted by distance from that point and no further than 30 kilometers away:

```sh
curl -X GET "https://playground-search-api.windingtree.com/hotels?location=46.770066,23.600819:30&sortByDistance=46.770066,23.600819&limit=3" -H "accept: application/json"
```

You are not required to use both filters and sorts simultaneously; you can pick one or the other, or combine them.
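For instance, a filter-only query that returns hotels within 30 kilometers of the same point, without any sorting, should look like this:

```sh
curl -X GET "https://playground-search-api.windingtree.com/hotels?location=46.770066,23.600819:30" -H "accept: application/json"
```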
## How to add a new index?

This has multiple steps:

1. Prepare a db model in `src/db/indexed/models` and register it in `src/db/indexed/index.js` (see the sketch after this list).
2. Prepare a query parser in `src/services/query-parser/indices` and register it in `src/services/query-parser/index.js`.
3. Prepare an indexer in `src/services/indexer/indices` and register it in `src/services/indexers/index.js`.
4. Add a few tests for all components.
5. You're good to go!
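As a rough illustration of step 1, a minimal model module could look something like the sketch below. This is a hypothetical example built on plain Knex, not the actual model interface used in this repository; the table name, columns and exported functions are all made up.

```js
// src/db/indexed/models/rating.js -- hypothetical example model
const TABLE = 'rating_index';

// Create the table backing this index; `db` is a configured knex instance.
const createTable = async (db) => {
  await db.schema.createTable(TABLE, (table) => {
    table.increments('id');
    table.string('hotel_address').index();
    table.integer('rating');
  });
};

// Drop the table, e.g. when re-creating the database from scratch.
const dropTable = async (db) => {
  await db.schema.dropTableIfExists(TABLE);
};

// Store the indexed value for a single hotel, replacing any previous row.
const upsert = async (db, hotelAddress, rating) => {
  await db(TABLE).where('hotel_address', hotelAddress).delete();
  await db(TABLE).insert({ hotel_address: hotelAddress, rating });
};

module.exports = { TABLE, createTable, dropTable, upsert };
```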
## Proposed architecture

### Ideal flow
- Hotel writes its data to the platform with Write API.
- Write API sends notifications about hotel data changes via Update API.
- If a new hotel pops up, Subscriptions Management makes sure that the Search API is tracking all the hotel data changes.
- Subscription handler (or the Resync cron job) tells the Crawler to collect the changed data via Read API (currently, version 0.8.x is assumed).
- Crawler puts a copy of the data into Permanent Storage.
- Crawler also bumps the Indexer and Price Computation components to start working with the changed data.
- Indexer re-indexes the hotel data from Permanent Storage to make search easier where possible (such as location data or descriptions for fulltext search) and puts the results into Indexed Storage.
- Price Computation (admittedly a silly name) re-computes all of the prices based on new hotel information from Permanent Storage and puts them into Price Storage. It's not really clear how it should know which prices to compute, though.
- OTAs (or other users of the system) post queries via Query API and get quick responses.
### Various notes
- Price (and all other guest-related) query results might be hard to pre-compute. It might be feasible to collect common query types and pre-compute appropriate data for those.
- There's no decision yet on how the Query API will communicate with the outside world. The contenders probably are: a REST API with query strings, a REST API with a custom query language, or a GraphQL endpoint.
- Query API has to offer ways of sorting the data and some relevance score for search results. We also cannot forget about pagination.
- The Resync cron job has to be in place because there's no guarantee that the outside system is reliable or that every hotel uses Update API.
- Indexed and Price storages are fast, ideally in-memory databases.
- Permanent storage is in place in case Indexed Storage and/or Price Storage somehow get corrupted or destroyed. The Search box can re-index the whole WT platform way faster than if it had to get all the data from various distributed storages.