Service Example 3: VPN - Minimalistic HTTP proxy
In this example, we're showcasing how the VPN functionality can be used to enable the local requestor agent to serve as a HTTP proxy for web servers running on the provider nodes.
The example depicts the following features:
    Golem VPN
    Service execution
Full code of the example is available in the yapapi repository: https://github.com/golemfactory/yapapi/tree/b0.7/examples/http-proxy

Prerequisites

As with the other examples, we assume here that you already have your yagna daemon set up to request the test tasks and that you have configured your Python environment to run the examples using the latest version of yapapi. If this is your first time using Golem and yapapi, please first refer to the resources linked above.

The VM image

For the VM image of this example, we're going to use a stock Docker image of the nginx HTTP server. Thus, our Dockerfile consists of a single line:
```dockerfile
FROM nginx:stable-alpine
```
In the example code, we're already using a pre-built and pre-uploaded Golem VM image, but if you'd like to experiment with other HTTP servers or web-based applications, please follow our guides on preparing your own VM images for Golem.

The Code

What we want to achieve here is threefold. First, we'll define our service so that it runs our web content (here just a static, albeit customized, HTML page) on the provider node.
Secondly, we'll need a local HTTP server (based on the aiohttp library) listening to connections on our requestor machine (the localhost).
And thirdly, we'll show you how to distribute requests from the local HTTP server to the provider nodes.

Running the remote HTTP server

Since, as mentioned above, we're mostly interested in running the stock nginx HTTP server image, there's not much to do besides defining the payload which uses the hash of our pre-uploaded nginx image:
```python
class HttpService(Service):
    @staticmethod
    async def get_payload():
        return await vm.repo(
            image_hash="16ad039c00f60a48c76d0644c96ccba63b13296d140477c736512127",
            # we're adding an additional constraint to only select those nodes that
            # are offering VPN-capable VM runtimes so that we can connect them to the VPN
            capabilities=[vm.VM_CAPS_VPN],
        )
```
The important part of the above payload definition is the addition of the capabilities constraint, which specifies that we only want to deploy our image on those providers whose VM runtime supports the new VPN functionality.

Remote http server initialization

Additionally, because the GVMI format for VM images and yagna's VM runtime (responsible for running the VM containers on the provider nodes) do not yet support Docker's ENTRYPOINT and CMD commands, we need to start the nginx HTTP server with explicit execution-script commands after the image is deployed. That's what the start handler of our Service does:
```python
async def start(self):
    # perform the initialization of the Service
    # (which includes sending the network details within the `deploy` command)
    async for script in super().start():
        yield script

    # start the remote HTTP server and give it some content to serve in `index.html`
    script = self._ctx.new_script()
    script.run("/docker-entrypoint.sh")
    script.run("/bin/chmod", "a+x", "/")
    msg = f"Hello from inside Golem!\n... running on {self.provider_name}"
    script.run(
        "/bin/sh",
        "-c",
        f"echo {shlex.quote(msg)} > /usr/share/nginx/html/index.html",
    )
    script.run("/usr/sbin/nginx")
    yield script
```
The initial async for loop ensures that the default start handler, which sends the deploy and start commands, is executed correctly and that the script it generates is sent for execution.
The remainder of the method:
    calls the script (the one specified in the ENTRYPOINT of the original Dockerfile) which configures the nginx daemon,
    sets up the correct permissions on the root directory so that the nginx daemon can access the directory containing the content,
    creates the index.html file, customized with the name of the provider node on which the server is running,
    and finally, launches the nginx HTTP server.
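A note on the quoting used when writing index.html: shlex.quote wraps the message in single quotes so that the embedded newline and other special characters survive being passed through /bin/sh. A small standalone illustration (the provider name here is a made-up placeholder):

```python
import shlex

# the same kind of message the start handler writes to index.html
msg = "Hello from inside Golem!\n... running on some-provider"

# shlex.quote wraps the string in single quotes, so the embedded
# newline reaches `echo` literally instead of breaking the shell command
cmd = f"echo {shlex.quote(msg)} > /usr/share/nginx/html/index.html"
print(cmd)
```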
We don't need to specify the contents of the run handler for the service: after the HTTP daemon is started, there are no more scripts we want to execute on the VM, and we'll only communicate with the server through regular HTTP requests within our VPN.

Running the local HTTP server

Okay, now that we have taken care of the provider end, we need to provide the code which runs the local server.
We're using the aiohttp library to define a very simple TCP server which will listen to requests coming to a port on our localhost.
First, let's define the handler that will receive the local requests and generate responses:
```python
request_count = 0


async def request_handler(cluster: Cluster, request: web.Request):
    global request_count

    print(f"{TEXT_COLOR_GREEN}local HTTP request: {dict(request.query)}{TEXT_COLOR_DEFAULT}")

    instance: HttpService = cluster.instances[request_count % len(cluster.instances)]
    request_count += 1
    response = await instance.handle_request(request.path_qs)
    return web.Response(text=response)
```
As you can see, its main job is to select (in round-robin fashion) an instance of our HttpService and call its handle_request method with the path and query string of the incoming HTTP request. Once the instance has handled the request, an aiohttp.web.Response is returned.
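The round-robin selection itself is just a modulo counter over the list of instances. Stripped of the Golem specifics, it can be sketched like this (the instance names are made up for illustration):

```python
# a minimal sketch of the round-robin selection used by request_handler
instances = ["provider-a", "provider-b", "provider-c"]
request_count = 0

def next_instance():
    global request_count
    instance = instances[request_count % len(instances)]
    request_count += 1
    return instance

# successive requests cycle through the instances in order
order = [next_instance() for _ in range(5)]
print(order)
# → ['provider-a', 'provider-b', 'provider-c', 'provider-a', 'provider-b']
```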
Secondly, we need to provide a small bit of boilerplate that launches the local TCP server for us:
```python
async def run_local_server(cluster: Cluster, port: int):
    """
    run a local HTTP server, listening on `port`
    and passing all requests through the `request_handler` function above
    """
    handler = functools.partial(request_handler, cluster)
    runner = web.ServerRunner(web.Server(handler))
    await runner.setup()
    site = web.TCPSite(runner, port=port)
    await site.start()

    return site
```
Again, all this does is define a local server that listens on the provided local port and uses the handler defined above to process all incoming requests.
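Note the functools.partial call above: it pre-binds the cluster argument so that the resulting callable has the one-argument signature the web server expects. A stdlib-only sketch of the same pattern, with an illustrative stand-in for the real handler:

```python
import asyncio
import functools

# an illustrative handler with the same shape as request_handler above
async def request_handler(cluster, request):
    # `cluster` is pre-bound by functools.partial; the server only passes `request`
    return f"routing {request!r} via {cluster!r}"

handler = functools.partial(request_handler, "my-cluster")

# the partial now takes just the request, as a web framework would call it
result = asyncio.run(handler("GET /"))
print(result)
```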

The proxy

Now we need to pass the local requests to the remote servers running on provider machines. That's the job of the handle_request method of our HttpService class:
```python
async def handle_request(self, query_string: str):
    """
    handle the request coming from the local HTTP server
    by passing it to the instance through the VPN
    """
    instance_ws = self.network_node.get_websocket_uri(80)
    app_key = self.cluster._engine._api_config.app_key

    print(f"{TEXT_COLOR_GREEN}sending a remote request to {self}{TEXT_COLOR_DEFAULT}")
    ws_session = aiohttp.ClientSession()
    async with ws_session.ws_connect(
        instance_ws, headers={"Authorization": f"Bearer {app_key}"}
    ) as ws:
        await ws.send_str(f"GET {query_string} HTTP/1.0\n\n")
        headers = await ws.__anext__()
        print(f"{TEXT_COLOR_GREEN}remote headers: {headers.data} {TEXT_COLOR_DEFAULT}")
        content = await ws.__anext__()
        data: bytes = content.data
        print(f"{TEXT_COLOR_GREEN}remote content: {data} {TEXT_COLOR_DEFAULT}")
        response_text = data.decode("utf-8")
        print(f"{TEXT_COLOR_GREEN}local response: {response_text}{TEXT_COLOR_DEFAULT}")

    await ws_session.close()
    return response_text
```
The first thing we're doing here is getting the URI for the websocket which allows us to connect to the remote node. We use a helper method of the network_node record of the Service. As the URI is also an endpoint of our REST API, we need to pass it the API key - the same key that we get from our yagna daemon and that we provide to the requestor agent script using the YAGNA_APPKEY environment variable.
Next, we establish a websocket connection to the aforementioned endpoint using the API key. Within the connection, we perform an HTTP/1.0 GET request using the path and query string received from the local HTTP server.
Once we receive a response from the remote HTTP server, we generate the response text which our previously-defined request handler will pass as the response of the local HTTP server.
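What travels over the websocket is just a raw HTTP/1.0 exchange: the request is sent as plain text, and the frames read back carry the response headers and body. A small stdlib-only sketch of that exchange on the wire (the response bytes below are made up for illustration):

```python
# what the proxy writes to the websocket: a raw HTTP/1.0 request
request = "GET /?name=golem HTTP/1.0\n\n"

# an illustrative raw response, roughly what the remote nginx might send back
raw_response = b"HTTP/1.0 200 OK\r\nContent-Type: text/html\r\n\r\nHello from inside Golem!\n"

# split the raw bytes into the header block and the body,
# mirroring the two websocket frames the proxy reads
header_block, _, body = raw_response.partition(b"\r\n\r\n")
status_line = header_block.split(b"\r\n")[0].decode()
print(status_line)
print(body.decode("utf-8"))
```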
That's all there is to it. What remains now is to use the Golem engine to start our service.

Launching the service

To launch the service, we first need to initialize Golem:
```python
async with Golem(
    budget=1.0,
    subnet_tag=subnet_tag,
    payment_driver=payment_driver,
    payment_network=payment_network,
) as golem:
    commissioning_time = datetime.now()
```
Once initialized, we use it to perform the two actions which launch our service.
The first one creates the VPN which the provider nodes will become part of and through which the requestor will be able to communicate with them:
```python
network = await golem.create_network("192.168.0.1/24")
```
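The "192.168.0.1/24" argument is a standard CIDR specification. Python's stdlib ipaddress module can illustrate which addresses such a network makes available to the requestor and provider nodes:

```python
import ipaddress

# the same CIDR spec that is passed to golem.create_network;
# strict=False permits the host bit set in "192.168.0.1"
net = ipaddress.ip_network("192.168.0.1/24", strict=False)

print(net)                # the network itself: 192.168.0.0/24
print(net.num_addresses)  # 256 addresses in a /24 block
print(next(net.hosts()))  # 192.168.0.1 - the first assignable host address
```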
The second one instructs the Golem engine to commission the service instances and - by passing the network created above in the network argument - connect them to the just-created VPN:
```python
cluster = await golem.run_service(HttpService, network=network, num_instances=num_instances)
```
After the above call, the engine publishes the appropriate demand, signs agreements and finally commissions the instances of our service.

Waiting for the service to start

Now we wait...
```python
def instances():
    return [f"{s.provider_name}: {s.state.value}" for s in cluster.instances]

def still_starting():
    return len(cluster.instances) < num_instances or any(
        s.state == ServiceState.starting for s in cluster.instances
    )

# wait until all remote http instances are started
while still_starting() and datetime.now() < commissioning_time + STARTING_TIMEOUT:
    print(f"instances: {instances()}")
    await asyncio.sleep(5)

if still_starting():
    raise Exception(
        f"Failed to start instances after {STARTING_TIMEOUT.total_seconds()} seconds"
    )
```
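This wait loop is a generic poll-until-ready-or-timeout pattern. A stripped-down, runnable version of the same idea, with a dummy readiness check standing in for the cluster state:

```python
import asyncio
from datetime import datetime, timedelta

STARTING_TIMEOUT = timedelta(seconds=2)

async def wait_until_started(still_starting, poll_interval=0.05):
    """Poll `still_starting` until it returns False or the timeout elapses."""
    commissioning_time = datetime.now()
    while still_starting() and datetime.now() < commissioning_time + STARTING_TIMEOUT:
        await asyncio.sleep(poll_interval)
    if still_starting():
        raise Exception(
            f"Failed to start instances after {STARTING_TIMEOUT.total_seconds()} seconds"
        )

# dummy readiness check standing in for the cluster state:
# it reports "still starting" for the first two polls only
polls = 0
def still_starting():
    global polls
    polls += 1
    return polls <= 2

asyncio.run(wait_until_started(still_starting))
print("all instances started")
```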

Starting the local HTTP server

Once the commissioned instances of our service are up and running, we can finally start the local HTTP server and announce its creation to the console:
```python
site = await run_local_server(cluster, port)

print(
    f"{TEXT_COLOR_CYAN}Local HTTP server listening on:\nhttp://localhost:{port}{TEXT_COLOR_DEFAULT}"
)
```
Then, again, we wait until the script is stopped with a Ctrl-C, allowing the service to run and our simple proxy to pass requests and responses between the local server and the remote ones.
At this stage, you can see it in action yourself by connecting to the displayed address. By default, it should be http://localhost:8080.

Cleaning up

Finally, once Ctrl-C is pressed, it's time to stop and clean up our whole carefully-set-up machinery.
First we stop the local HTTP server:
```python
await site.stop()
print(f"{TEXT_COLOR_CYAN}HTTP server stopped{TEXT_COLOR_DEFAULT}")
```
Then we signal for the cluster to stop all of our service instances running on the provider nodes and wait for them to actually shut themselves down:
```python
cluster.stop()

cnt = 0
while cnt < 3 and any(s.is_available for s in cluster.instances):
    print(instances())
    await asyncio.sleep(5)
    cnt += 1
```
With all service instances stopped, we can finally shut down and remove the VPN we had created at the very beginning:
```python
await network.remove()
```
That's it. We have demonstrated a way to launch services on VM containers running within the provider nodes and to connect them using a VPN.