All the code for this example can be found in its `simple_service` directory. For the sake of simplicity, all the code is in Python and, to limit the size of the VM image the providers need to download, we're relying on an official Python 3.8 slim image plus two additional Python libraries: `numpy` and `matplotlib`
.

The service's business logic is contained in the `simple_service/simple_service.py` file in the example's directory. It's a simple script with a command-line interface which accepts one of five commands. The usage is as follows:

- `--init` is only ever executed once and creates an extremely simple sqlite database (one table with just two columns: the value and the timestamp of an observation) to hold the observed values,
- `--add` adds a single value to the stored list of observations,
- `--stats` provides a JSON structure containing a few metrics that reflect the characteristics of the distribution of the accumulated values (like the mean, the deviation and the number of observations so far),
- `--plot` produces a graphical plot of the accumulated observations (either as a time series or as a distribution plot),
- `--dump` dumps the JSON-serialized contents of the whole database.

In other words, `--stats` writes its metrics to standard output, while `--plot` produces an image file. The observations themselves come from a background process which periodically feeds a new value to the service using the `--add` parameter; it is implemented in `simple_service/simulate_observations.py`.
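To make the service's data model concrete, here is a minimal, self-contained sketch of the ideas behind `--init`, `--add` and `--stats`, together with a simulator feeding in normally distributed values. The function names, schema and distribution parameters are illustrative assumptions, not the example's actual code:

```python
import json
import random
import sqlite3
import statistics

def init(db: sqlite3.Connection) -> None:
    # --init: one table with just two columns, the value and the timestamp
    db.execute(
        "CREATE TABLE IF NOT EXISTS observations ("
        "value REAL NOT NULL, "
        "ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP)"
    )

def add(db: sqlite3.Connection, value: float) -> None:
    # --add: append a single observation
    db.execute("INSERT INTO observations (value) VALUES (?)", (value,))

def stats(db: sqlite3.Connection) -> str:
    # --stats: a few metrics describing the accumulated values, as JSON
    values = [row[0] for row in db.execute("SELECT value FROM observations")]
    return json.dumps({
        "count": len(values),
        "mean": statistics.mean(values) if values else None,
        "stdev": statistics.stdev(values) if len(values) > 1 else None,
    })

def simulate(db: sqlite3.Connection, n: int) -> None:
    # Stand-in for the background simulator, which repeatedly feeds
    # values to the service (the real one calls the CLI with --add).
    for _ in range(n):
        add(db, random.gauss(10.0, 1.0))

db = sqlite3.connect(":memory:")
init(db)
simulate(db, 5)
print(stats(db))
```

The real example keeps the database on disk inside the VM and exposes these operations through the script's command-line interface rather than as Python functions.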
There's also `simple_service/simulate_observations_ctl.py`, the role of which is to enable starting and stopping the daemon. It's again a CLI tool that accepts just one of two parameters, `--start` and `--stop`, which, quite obviously, start and stop the background process that generates the simulated observations.

All of this is packaged into a VM image (`simple_service/simple_service.Dockerfile`), built on top of Python 3.8's slim image, with the addition of `numpy` and `matplotlib`. The `ENTRYPOINT` statement, though included, is there just for convenience when testing the image in Docker itself. That entrypoint is ignored by Golem's VM runtime, and all commands need to be referred to by their absolute paths (`/golem/run...`). Should you rebuild and upload a modified image, you'll also need to update the `get_payload`
hook to point to your just-uploaded image.

The requestor agent is built around two classes: `Golem` and `Service`.

`Golem` is the new main entrypoint of Golem's high-level API. It holds the configuration of the API, provides orchestration of both task-based and service-based jobs, and keeps track of agreements, payments and all the other internal mechanisms, while supplying the developer with a simple interface to run services on. Most importantly, we'll be interested in its `run_service` method, which spawns one or more instances of a service based on the user's specification.

`Service`, on the other hand, is the base class providing the interface through which specific user service classes are defined. It allows the developer to define the payload the provider needs to run - e.g. a VM runtime with a given VM image - and it exposes three discrete handlers responsible for interacting with that payload using exescript commands during the service's startup, regular run and shutdown.
Our example's service class derives from `Service` and defines some constants that refer to the paths of the executables that we'll want to run in our VM image.

In its `get_payload` hook, we define the payload as a VM runtime (hence `vm.repo`), using the hash of the file uploaded to Golem's image repository. The returned object derives from `Payload` and contains a set of properties and constraints that define the execution environment in which we want our service to run. The `vm.repo` function does exactly that for a VM runtime but, as long as the requestor and provider agree, it can be almost anything. We'll be showing you how to define your own provider-end runtime and how to interact with it from the requestor's end in one of our future tutorials.

The `start` handler runs the `simulate_observations_ctl.py` script.
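Putting the pieces described so far together, a skeleton of such a service class might look roughly like the following. This is a hedged sketch rather than the example's actual code: the image hash is a placeholder, the in-image path is an assumption, and it presumes the Script-based flavour of yapapi's API; it cannot run standalone without yapapi and a funded requestor setup.

```python
# A sketch only: the image hash is a placeholder and the in-image path
# is an assumption, not the example's actual value.
from yapapi.payload import vm
from yapapi.services import Service

SIMULATE_BG = "/golem/run/simulate_observations_ctl.py"  # assumed path

class SimpleService(Service):
    @staticmethod
    async def get_payload():
        # Define the payload as a VM runtime, identified by the hash
        # received when uploading the image to Golem's repository.
        return await vm.repo(
            image_hash="<hash-of-your-uploaded-image>",
            min_mem_gib=0.5,
            min_storage_gib=2.0,
        )

    async def start(self):
        # The first batch is implicitly preceded by `deploy` and `start`;
        # here we only need to launch the observation generator.
        script = self._ctx.new_script()
        script.run(SIMULATE_BG, "--start")
        yield script
```
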
As described above, `simulate_observations_ctl.py` starts the background job that generates our observations and adds them to the service's database. As this is the first command issued, the work context will implicitly prepend our batch with the `deploy` and `start` commands, which (unless we need to parametrize the `start` call - not needed for a VM runtime) is exactly what we need.

Next comes the `run` handler. After an initial delay (`asyncio.sleep(10)`), we're going to run two consecutive batches of commands.

The first batch executes `simple_service.py`
first with the `--stats` and then with the `--plot` argument, to first retrieve a JSON dictionary of the observations' statistical metrics and then to generate a PNG plot depicting their distribution.

The results of the `script.run()` calls here are saved, which enables us to retrieve the outputs of these commands later. We then await those future results and, when the script finishes its execution, we receive the captured results and extract the standard output of both of those commands. The output of `--stats` is printed as-is, while the output of `--plot` is used by another batch, which downloads the file that the `--plot` command generated. Here we don't need to capture any results, as the effect is the PNG file that's downloaded to our local filesystem. Finally, we commission the service by passing our service class (`SimpleService`) to `run_service`.
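The trick of saving the value returned by `script.run()` and awaiting it only after the batch has been yielded can be illustrated with plain `asyncio`, independently of yapapi. Everything below (the `Batch` class and its methods) is a toy stand-in, not Golem's API:

```python
import asyncio

class Batch:
    """Toy stand-in for a batch of exescript commands: run() records a
    command and returns a future resolved once the batch executes."""

    def __init__(self):
        self._commands = []

    def run(self, command: str) -> asyncio.Future:
        future = asyncio.get_running_loop().create_future()
        self._commands.append((command, future))
        return future

    async def execute(self) -> None:
        # Stand-in for the provider running the whole batch and
        # capturing each command's standard output.
        for command, future in self._commands:
            future.set_result(f"output of {command}")

async def main() -> list:
    batch = Batch()
    # Save the future results first...
    stats_result = batch.run("simple_service.py --stats")
    plot_result = batch.run("simple_service.py --plot")
    # ...let the whole batch execute...
    await batch.execute()
    # ...and only then await the captured outputs.
    return [await stats_result, await plot_result]

print(asyncio.run(main()))
```

The point of the pattern is that the futures are created while the batch is being composed, but their results only become available after the whole batch has run on the provider.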
`run_service` returns a `Cluster`, which is an object that can be queried to get the state of, and to control, the instances that we thus commissioned. As the instances are spawned, the engine runs the "start", "run" and "shutdown" handlers on them.

The remainder of the `main`
function in `simple_service.py` is just devoted to tracking the state of the commissioned instances for a short while and then instructing the cluster to stop.
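That final loop can be sketched in isolation as follows. `DummyCluster` below is a stand-in exposing just the members we rely on (`instances` and `stop()`), not yapapi's actual `Cluster` class, and the durations are illustrative:

```python
import asyncio

class DummyInstance:
    # Stand-in for a single service instance with a readable state.
    def __init__(self, name: str):
        self.name = name
        self.state = "running"

class DummyCluster:
    # Stand-in for the object returned by run_service.
    def __init__(self):
        self.instances = [DummyInstance("instance 1")]
        self.stopped = False

    def stop(self):
        # Ask all commissioned instances to shut down.
        self.stopped = True
        for instance in self.instances:
            instance.state = "terminated"

async def monitor(cluster, duration: float = 1.0, poll: float = 0.2):
    # Track the state of the commissioned instances for a short while...
    elapsed = 0.0
    while elapsed < duration:
        print([(i.name, i.state) for i in cluster.instances])
        await asyncio.sleep(poll)
        elapsed += poll
    # ...then instruct the cluster to stop.
    cluster.stop()

asyncio.run(monitor(DummyCluster(), duration=0.4, poll=0.1))
```

With the real API, the same loop would poll the cluster's instances and finally call the cluster's stop method, letting the engine run the shutdown handlers and settle payments.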