# queue-api

A simple job queue API.
## Usage

To run, just type:

```
lein run
```

then access `localhost:3000/swagger-ui`.
## Stack

I chose Luminus for my stack as it makes the initial setup much easier: it provides a wide range of profiles for a bunch of technologies.

I bootstrapped the project with `lein new luminus queue-api +swagger +service +kibit`, plus DataScript, which doesn't come with Luminus.
### +swagger

Whenever possible I add it to a project, as it makes it easier to visualize and test endpoints.

### +service

Removes all the frontend scaffolding that I don't need.

### +kibit

Gives some insight into how to make your code more idiomatic.
### DataScript

DataScript was chosen for its easy setup: after little to no effort I had it working. Even though it was meant to run in the browser, it fit nicely into the project, and because it works pretty much like Datomic it has a powerful query system and plays seamlessly with Clojure. It also has sparse but serviceable documentation with some samples, and whenever I couldn't find something for DataScript I'd search for Datomic instead, since the query systems of the two are compatible.
## Solution

### Data structure

The project has two models:
- Agent
  - `:agent/id`: unique identification of an agent
  - `:agent/name`: agent name
  - `:agent/primary-skillset`: list of primary skillsets of an agent
  - `:agent/secondary-skillset`: list of secondary skillsets of an agent
  - `:agent/job`: reference to the job that the agent is processing
- Job
  - `:job/id`: unique identification of a job
  - `:job/status`: status of the job; it can be:
    - `:unassigned`: it is waiting to be assigned
    - `:processing`: it is being processed by an agent
    - `:completed`: it is done
  - `:job/agent`: reference to the agent that is processing or has processed this job
  - `:job/type`: type of the job
  - `:job/date`: date and time when the job entered the system
  - `:job/urgent`: urgency flag that gives a job higher priority
These models wrap up into the following DataScript schema:

```clojure
{:agent/id                 {:db/unique :db.unique/identity}
 :agent/primary-skillset   {:db/cardinality :db.cardinality/many}
 :agent/secondary-skillset {:db/cardinality :db.cardinality/many}
 :agent/job                {:db/valueType :db.type/ref}
 :job/id                   {:db/unique :db.unique/identity}
 :job/agent                {:db/valueType :db.type/ref}}
```
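As a minimal sketch (not project code) of what this schema buys us: because `:agent/id` and `:job/id` are `:db.unique/identity`, lookup refs like `[:job/id "j1"]` resolve to the right entity, and the `:db.type/ref` attributes let us navigate between agents and jobs:

```clojure
(require '[datascript.core :as d])

(def schema
  {:agent/id                 {:db/unique :db.unique/identity}
   :agent/primary-skillset   {:db/cardinality :db.cardinality/many}
   :agent/secondary-skillset {:db/cardinality :db.cardinality/many}
   :agent/job                {:db/valueType :db.type/ref}
   :job/id                   {:db/unique :db.unique/identity}
   :job/agent                {:db/valueType :db.type/ref}})

(def conn (d/create-conn schema))

;; :agent/job is a ref, so the lookup ref [:job/id "j1"]
;; resolves to the job entity when transacted.
(d/transact! conn [{:job/id "j1" :job/status :processing}
                   {:agent/id "a1" :agent/job [:job/id "j1"]}])

;; Navigate from the agent to its job through the ref.
(:job/id (:agent/job (d/entity @conn [:agent/id "a1"])))
;; => "j1"
```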
### services.clj

Beyond the Luminus-generated files, there are really only two files that hold the app's core logic: `services.clj` and `db/core.clj`.

`services.clj` holds all the code for endpoint definitions and input validation.

Given the exercise requirements, five endpoints are needed:
- Endpoint to add an agent: a `:put` at `/agent`
- Endpoint to get how many jobs of each type an agent has performed: a `:get` at `/agent/:id`
- Endpoint to add a job: a `:put` at `/job`
- Endpoint to request a job: a `:post` at `/job`
- Endpoint to get the current queue state: a `:get` at `/job`
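As a sketch of what one of these routes might look like with compojure-api (which the `+swagger` profile wires in) — the schema and handler body below are illustrative, not the project's actual code:

```clojure
(ns queue-api.routes.services
  (:require [compojure.api.sweet :refer [context PUT]]
            [ring.util.http-response :refer [ok]]
            [schema.core :as s]))

;; Illustrative body schema; the real validation lives in services.clj.
(s/defschema Agent
  {:id                 s/Str
   :name               s/Str
   :primary-skillset   [s/Str]
   :secondary-skillset [s/Str]})

(def agent-routes
  (context "/agent" []
    (PUT "/" []
      :body [agent Agent]
      :summary "Adds an agent to the queue"
      (ok {:added (:id agent)}))))
```

The `:body` and `:summary` metadata are what swagger-ui picks up to render and validate the endpoint.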
For model and validation details, see the swagger-ui.
### db/core.clj

`core.clj` holds all the logic to interact with DataScript, and therefore all the code that manages the queue.

The idea behind it is actually simpler than part 1, since DataScript handles the hard work.

For example, to store jobs and agents I simply `transact!` the entire object and it is good to go:
```clojure
(d/transact! queue-api.db.core/conn
             [{:job/id "319ce5e6-6ed6-4931-8e29-550052b02409"
               :job/type "bills-request"
               :job/urgent false
               :job/date (time/now)
               :job/status :unassigned}])
```
Meanwhile, with a simple query I can fetch exactly the piece of information I need at the moment:
```clojure
(d/q '[:find ?d ?id
       :where
       [?e :job/date ?d]
       [?e :job/id ?id]
       [?e :job/urgent false]
       [?e :job/status :unassigned]
       [?e :job/type ?t]
       [(contains? #{"bills-request" "rewards-request"} ?t)]]
     @conn)
```
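The query returns a set of `[date id]` tuples, so choosing which job to hand out next is plain Clojure. A small helper like the hypothetical `oldest-job-id` below (not project code) sorts by date and takes the oldest, matching the queue's first-in, first-out rule:

```clojure
;; Pick the id of the oldest job from a set of [date id] tuples.
(defn oldest-job-id [jobs]
  (->> jobs
       (sort-by first) ; oldest date first
       first           ; oldest [date id] tuple
       second))        ; keep just the id

(oldest-job-id #{[#inst "2018-01-02" "j2"]
                 [#inst "2018-01-01" "j1"]})
;; => "j1"
```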
## Testing

Every test case starts with a pristine database and then sets up all the data needed to test every possible combination (at least every one I could think of); once a test is done, it does it all over again:
```clojure
(use-fixtures
  :each
  (fn [f]
    (mount/stop #'queue-api.db.core/conn)
    (mount/start #'queue-api.db.core/conn)
    (d/transact! conn base-schema)
    (f)))
```
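With that fixture in place, every test can assume an empty queue. A hypothetical test (the test name and transacted data are illustrative) might look like:

```clojure
(ns queue-api.db.core-test
  (:require [clojure.test :refer [deftest is]]
            [datascript.core :as d]
            [queue-api.db.core :refer [conn]]))

(deftest unassigned-job-test
  ;; The fixture restarted conn, so the queue starts empty.
  (d/transact! conn [{:job/id "j1" :job/status :unassigned}])
  (is (= #{["j1"]}
         (d/q '[:find ?id
                :where
                [?e :job/status :unassigned]
                [?e :job/id ?id]]
              @conn))))
```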
There are usually problems with this approach: it gets slower as the system grows; sometimes you don't have the luxury of starting with a clean database; or the tests would require so much pre-existing data that they would become a mess. Fortunately, this app doesn't check any of those boxes, since it works with an in-memory database and has a very small set of models.