Diffstat (limited to 'README.md')
-rw-r--r--  README.md  43
1 file changed, 35 insertions(+), 8 deletions(-)
diff --git a/README.md b/README.md
index 1f0b199..95c842c 100644
--- a/README.md
+++ b/README.md
@@ -13,7 +13,7 @@ then access [localhost:3000/swagger-ui](http://localhost:3000/swagger-ui/index.h
## Stack
I chose [luminus](http://www.luminusweb.net/) for my stack as it makes the initial setup much easier, since it provides a wide range of configuration options for a bunch of technologies.
-To bootstrap the project I used `lein new luminus queue-api +swagger +service +kibit` plus datascrypt, which by default doesn't come with Luminus.
+To bootstrap the project I used `lein new luminus queue-api +swagger +service +kibit`, plus DataScript, which doesn't come with Luminus by default.
### +Swagger
@@ -51,10 +51,10 @@ Project has two models:
* `:unassigned` it is waiting to be assigned
* `:processing` it is being processed by an agent
* `:completed` it has been finished.
- * `:job/agent` reference a job that is processing this job or had processed it. it is nil when `:unassigned`
+ * `:job/agent` references the agent that is processing this job or has processed it.
* `:job/type` the type of the job
- * `:job/date` date time when job has entered into the system.
- * `:job/urgent` urgent flag that tell when a job has priority over other not urgent ones
+ * `:job/date` the date and time when the job entered the system.
+ * `:job/urgent` flag that gives a job higher priority.
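+
+For illustration, a single job under this model might look like the map below (a sketch; the id, date, and agent values are made up):
+
+```clojure
+{:job/id     "690de6bc-163c-4345-bf6f-25dd0c58e864" ;; made-up id
+ :job/type   "bills_request"
+ :job/urgent false
+ :job/status :unassigned
+ :job/date   #inst "2019-01-01T10:00:00.000-00:00"  ;; made-up date
+ :job/agent  nil}                                   ;; nothing is processing it yet
+```
+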
These models are wrapped up in a schema:
@@ -70,7 +70,7 @@ Those models wrap up in schema:
### services.clj
Beyond the generated Luminus files, there are two files that hold the core logic of the app: `services.clj` and `db/core.clj`.
-For `services.clj` it holds all code for endpoint definition and model validation and considering the exercise requirements we gonna need 5 endpoints:
+`services.clj` holds all the code for endpoint definitions and model validation. Considering the exercise requirements, we need 5 endpoints:
* Endpoint to add an agent is a `:put` at `/agent`
* Endpoint to get how many jobs of each type an agent has performed is a `:post` at `/agent`. Note: since this method doesn't modify anything I would normally have used a `:get` and passed the agent id via the path (`/agent/:id`), but one of the requirements is "*All endpoints should accept and return JSON content type payloads*", so I stuck with POST and PUT.
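+
+For a rough idea of what these definitions look like, here is a minimal sketch of the `:put` `/agent` endpoint in compojure-api style (assuming that is what the +swagger/+service profile generated; the schema fields and handler body are made up, not the actual code):
+
+```clojure
+(ns queue-api.routes.services
+  (:require [compojure.api.sweet :refer [defapi PUT]]
+            [ring.util.http-response :refer [ok]]
+            [schema.core :as s]))
+
+;; hypothetical request schema -- the real one is documented in swagger-ui
+(s/defschema Agent
+  {:agent/id   s/Str
+   :agent/name s/Str})
+
+(defapi service-routes
+  (PUT "/agent" []
+    :body [agent Agent]
+    :summary "adds a new agent"
+    ;; a real handler would transact! the agent into DataScript here
+    (ok agent)))
+```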
@@ -82,8 +82,8 @@ For model and validation details access swagger-ui.
### db/core.clj
-Core.clj holds all logic to interact with Datascrip therefore all the code to manage with the queue.
-The idea behind it is actually simpler than part 1 since Datascrip handle all data storing process.
+`core.clj` holds all the logic to interact with DataScript and therefore all the code to manage the queue.
+The idea behind it is actually simpler than part 1, since DataScript handles the hard work.
For example, to store jobs and agents I simply `transact!` the entire object and we're good to go.
```clojure
@@ -95,7 +95,34 @@ For example, to store jobs and agents I'd simply `transact!` the entire object a
:job/status :unassigned}])
```
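+
+Storing an agent works the same way; here is a sketch of what such a call could look like (the agent attribute names are assumptions made for illustration, not the actual schema):
+
+```clojure
+(d/transact! conn
+             [{:agent/id   "agent-1"        ;; made-up id
+               :agent/name "Agent Smith"}]) ;; made-up name
+```
+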
+Meanwhile, with a simple query I can fetch exactly the piece of information I need at any given moment.
+
+```clojure
+(d/q '[:find ?d ?id
+       :where
+       [?e :job/date ?d]
+       [?e :job/id ?id]
+       [?e :job/urgent false]
+       [?e :job/status :unassigned]
+       [?e :job/type ?t]
+       ;; keep only jobs whose type is in the given set
+       [(contains? #{"bills_request" "rewords_request"} ?t)]]
+     @conn)
+```
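+
+The result of such a query is a set of `[date id]` tuples, so picking the next job in line is just plain Clojure data wrangling. A minimal sketch, assuming the queue hands out the oldest matching job first (`next-job-id` is a hypothetical helper, not the actual function name):
+
+```clojure
+(defn next-job-id
+  "Returns the id of the oldest job among [date id] tuples, or nil when empty."
+  [tuples]
+  (->> tuples
+       (sort-by first) ;; oldest :job/date first
+       first           ;; the [date id] tuple of the oldest job
+       second))        ;; keep only the id
+```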
+
## Testing
-To be continued...
+Every test case starts with a pristine database and then sets up all the data needed to test every possible combination (at least every one I could think of); once the test is done, it starts all over again.
+
+```clojure
+(use-fixtures
+  :each
+  (fn [f]
+    ;; restart the in-memory DataScript connection and re-apply the schema
+    (mount/stop #'queue-api.db.core/conn)
+    (mount/start #'queue-api.db.core/conn)
+    (d/transact! conn base-schema)
+    (f)))
+```
+
+Usually there are problems with this approach: it gets slower as the system grows, sometimes you don't have the luxury of starting with a clean database, or the tests would require so much pre-existing data that they become a mess.
+Fortunately this app does not tick any of those boxes, since it works with an in-memory database and has a very small set of models.
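+
+A test built on top of this fixture might look like the sketch below (`add-job!` and `unassigned-jobs` are hypothetical helpers standing in for the real functions in `db/core.clj`):
+
+```clojure
+(deftest unassigned-job-test
+  (testing "a freshly added job starts out unassigned"
+    (add-job! {:job/id     "job-1"
+               :job/type   "bills_request"
+               :job/urgent false})
+    (is (= 1 (count (unassigned-jobs))))))
+```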