Golem is a global, open-source, decentralized supercomputer that anyone can access. It connects individual machines - be they laptops, home PCs or even data centers - to form a vast network whose purpose is to distribute computations to its provider nodes and to allow requestors to utilize its unique potential - whether that lies in its combined computing power, its geographical distribution or its censorship resistance.
Golem's requestor-side configuration consists of two separate components:

- the yagna daemon - your node in the new Golem network, responsible for communication with the other nodes, running the market and providing easy access to the payment mechanisms,
- the requestor agent - the part that the developer of the specific Golem application is responsible for.
The daemon and the requestor agent communicate using three REST APIs, which yapapi - Golem's high-level Python API - aims to abstract to a large extent, to make application development on Golem as easy as possible.
- prepare your payload - this needs to be a Docker image containing your application that will be executed on the provider's end. This image needs to have its volumes mapped in a way that allows the supervisor module to exchange data (write and read files) with it. The image then needs to be packed and uploaded into Golem's image repository using our dedicated tool -
- create your requestor agent - this is where yapapi comes in. Utilizing our high-level API, the creation of a requestor agent should be straightforward and require minimal effort. You can use the examples contained in this repository as references; the directory contains minimal examples of fully functional requestor agents and is therefore the best place to start exploring.
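The volume mapping mentioned above can be illustrated with a minimal Dockerfile sketch. The base image and directory paths here are purely illustrative assumptions, not requirements of any particular application:

```dockerfile
# Minimal payload image sketch - base image and paths are illustrative.
FROM python:3.8-slim

# Declare the directories through which the supervisor module on the
# provider's end will exchange data (write and read files) with the container.
VOLUME /golem/input /golem/output

WORKDIR /golem/input
```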
There are a few components that are crucial for any requestor agent app:
The heart of the high-level API is the Golem class (yapapi.Golem), which serves as the "engine" of a requestor agent.
Golem is responsible for finding providers interested in the jobs you want to execute, negotiating agreements with them and processing payments. It also implements core functionality required to execute commands on providers that have signed such agreements.
Golem provides two entry points for executing jobs on the Golem network, corresponding to the two basic modes of operation of a requestor agent:
execute_tasks allows you to submit a task-based job for execution. Arguments to this method must include a sequence of independent tasks (units of work) to be distributed among providers, a payload (a VM image) required to compute them, and a worker function, which will be used to convert each task to a sequence of steps to be executed on a provider. You may also specify the timeout for the whole job, the maximum number of providers used at any given time, and the maximum amount that you want to spend.
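The task-based mode can be pictured with a small, framework-free asyncio sketch. This is an illustrative model of our own making - independent units of work pulled from a shared queue by a bounded pool of workers standing in for providers - not yapapi's actual implementation:

```python
import asyncio
from typing import Any, List, Tuple


async def distribute(tasks: List[Any], max_workers: int) -> List[Tuple[str, Any]]:
    # A shared queue of independent tasks, consumed by at most
    # `max_workers` concurrent workers (stand-ins for provider nodes).
    queue: asyncio.Queue = asyncio.Queue()
    for t in tasks:
        queue.put_nowait(t)

    results: List[Tuple[str, Any]] = []

    async def worker(name: str) -> None:
        while not queue.empty():
            task = queue.get_nowait()
            await asyncio.sleep(0)  # stand-in for remote execution
            results.append((name, task * 2))  # toy "computation": doubling

    await asyncio.gather(*(worker(f"provider-{i}") for i in range(max_workers)))
    return results


results = asyncio.run(distribute([1, 2, 3, 4], max_workers=2))
print(sorted(r for _, r in results))  # -> [2, 4, 6, 8]
```

Note how the number of tasks is independent of the number of workers: with four tasks and two workers, each worker processes several tasks, mirroring how a single provider may compute multiple tasks of a job.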
run_service allows you, as you probably guessed, to run a service on Golem. Instead of a task-processing worker function, the argument to run_service is a class (a subclass of yapapi.Service) that implements the behaviour of your service in the various stages of its lifecycle (when it's starting, running, etc.). Additionally, you may specify the number of service instances you want to run and the service expiration datetime.
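The lifecycle idea can be sketched without the framework. The Service class and run_service function below are simplified stand-ins of our own making - not the actual yapapi API - showing how each instance is driven through its stages in order:

```python
import asyncio
from typing import List, Type


class Service:
    """Toy stand-in for a service base class with lifecycle hooks."""

    def __init__(self, name: str) -> None:
        self.name = name
        self.log: List[str] = []

    async def start(self) -> None:
        # steps executed once, when the instance is starting
        self.log.append("started")

    async def run(self) -> None:
        # the service's steady-state behaviour while it's running
        self.log.append("running")


async def run_service(cls: Type[Service], num_instances: int) -> List[Service]:
    # Spawn the requested number of instances and drive each one
    # through its lifecycle stages concurrently.
    instances = [cls(f"instance-{i}") for i in range(num_instances)]

    async def drive(svc: Service) -> None:
        await svc.start()
        await svc.run()

    await asyncio.gather(*(drive(s) for s in instances))
    return instances


instances = asyncio.run(run_service(Service, num_instances=2))
print([s.log for s in instances])  # -> [['started', 'running'], ['started', 'running']]
```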
Prior to version 0.6.0, only task-based jobs could be executed. For more information on both types of jobs, please refer to our handbook.
The worker will most likely be the very core of your task-based requestor app. You define this function in your agent code and then pass it (as the value of the worker parameter) to the execute_tasks method of Golem.
The worker receives a work context (yapapi.WorkContext) object that serves as an interface between your script and the execution unit on the provider. Using the work context, you define the steps that the provider needs to execute in order to complete the job you're giving them - e.g. transferring files to and from the provider or running commands within the execution unit on the provider's end.
Depending on the number of workers - and thus the maximum number of providers that execute_tasks utilizes in parallel - a single worker may tackle several tasks. You can therefore differentiate the steps that need to happen once per worker run (which usually means once per provider node, though that depends on the exact implementation of your worker function) from those that happen for each task. An example of the former would be the upload of a source file that's common to all tasks; an example of the latter - a step that triggers the processing of that file using a set of parameters specific to a particular task.
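The distinction can be sketched in plain asyncio. The worker below is an illustrative stand-in (no yapapi involved): it performs one setup step per worker run and one processing step per task it receives:

```python
import asyncio
from typing import AsyncIterable, AsyncIterator, List

steps: List[str] = []


async def worker(tasks: AsyncIterable[str]) -> None:
    # Happens once per worker run (typically once per provider node):
    steps.append("upload common source file")
    async for task in tasks:
        # Happens once for each task handed to this worker:
        steps.append(f"process {task}")


async def task_stream() -> AsyncIterator[str]:
    # A stand-in for the stream of tasks fed to a single worker.
    for t in ("task-1", "task-2"):
        yield t


asyncio.run(worker(task_stream()))
print(steps)  # -> ['upload common source file', 'process task-1', 'process task-2']
```

The setup step runs once even though the worker processes two tasks - the same shape a real worker takes when it uploads a shared input file before iterating over its tasks.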
The task (yapapi.Task) object describes a unit of work that your application needs to carry out. Golem will feed an instance of your worker - bound to a single provider node - with Task objects. The worker is responsible for completing those tasks. Typically, it will turn each task into a sequence of steps executed in a single run of the execution script on a provider's machine, in order to compute the task's result.
An example task-based Golem application, using a minimal Docker image (the Python file with the example and the Dockerfile for the image reside in
```python
import asyncio
from typing import AsyncIterable

from yapapi import Golem, Task, WorkContext
from yapapi.log import enable_default_logger
from yapapi.payload import vm


async def worker(context: WorkContext, tasks: AsyncIterable[Task]):
    async for task in tasks:
        context.run("/bin/sh", "-c", "date")

        future_results = yield context.commit()
        results = await future_results
        task.accept_result(result=results[-1])


async def main():
    package = await vm.repo(
        image_hash="d646d7b93083d817846c2ae5c62c72ca0507782385a2e29291a3d376",
    )

    tasks = [Task(data=None)]

    async with Golem(budget=1.0, subnet_tag="devnet-beta.2") as golem:
        async for completed in golem.execute_tasks(worker, tasks, payload=package):
            print(completed.result.stdout)


if __name__ == "__main__":
    enable_default_logger(log_file="hello.log")

    loop = asyncio.get_event_loop()
    task = loop.create_task(main())
    loop.run_until_complete(task)
```