{"id":9155,"date":"2022-05-20T19:55:58","date_gmt":"2022-05-20T19:55:58","guid":{"rendered":"https:\/\/reowolf.net\/?p=9155"},"modified":"2022-05-20T19:56:00","modified_gmt":"2022-05-20T19:56:00","slug":"reowolf-2-0-release-notes","status":"publish","type":"post","link":"https:\/\/reowolf.net\/reowolf-2-0-release-notes\/","title":{"rendered":"Reowolf 2.0: Release Notes"},"content":{"rendered":"\n

We are happy to release version 2 of the Reowolf project. This version introduces many new features: a select statement, run-time error handling, dynamic port hand-off, native TCP components, and detailed project documentation. This post summarizes the most important features and lays out our vision for the future of Reowolf. This release is sponsored by the Next Generation Internet fund.

This release can be found on our Gitlab repository page, which includes an issue tracker that is open for users to submit bug reports and feature requests. The release tag is **v2.0.1**. The software is licensed under the MIT license.

The following aspects of the Protocol Description Language (PDL) and its supporting run-time and compiler have been improved; the sections below demonstrate their functionality with small examples:

1. Select statements
2. Run-time error handling
3. Transmitting ports through ports (dynamic port hand-off)
4. Native components
5. Project documentation

Furthermore, this release fixes a number of bugs that were present in previous releases. The final section presents our vision for the future of Reowolf.

## Select statements

We have reworked the component synchronization mechanism and the underlying consensus algorithm that supports components.

Imagine we instantiate a `data_producer` a number of times (say *a*, *b*, and *c*) and link them up with a `data_receiver`. The data receiver takes a datum from one producer at a time.

Under the old synchronization mechanism, all data producers had to indicate that they were ready to synchronize, *even* when only one producer actually supplied data for the receiver to process. So the following example causes the inadvertent synchronization of all participating components, which makes all producing components wait on each other:

```
comp data_producer(out<u64> tx, u64 min_val, u64 max_val) {
    while (true) {
        sync {
            auto value = lots_of_work(min_val, max_val);
            put(tx, value);
        }
    }
}

comp data_receiver_v1(in<u64> rx_a, in<u64> rx_b, in<u64> rx_c, u32 num_rounds) {
    u32 counter = 0;
    auto rxs = { rx_a, rx_b, rx_c };
    while (counter < num_rounds) {
        auto num_peers = length(rxs);
        auto peer_index = 0;
        while (peer_index < num_peers) {
            sync {
                auto result = get(rxs[peer_index]);
                peer_index += 1;
            }
        }
        counter += 1;
    }
}
```

The reason was that a synchronous interaction checked *all* ports for a valid interaction. So although the round-robin receiver communicates with one peer per round, it still required the other peers to agree that they did not send anything at all! Note that this already implies that all running components need to synchronize. We could fix this by writing:

```
comp data_receiver_v2(in<u64> rx_a, in<u64> rx_b, in<u64> rx_c, u32 num_rounds) {
    u32 counter = 0;
    auto rxs = { rx_a, rx_b, rx_c };
    while (counter < num_rounds) {
        auto num_peers = length(rxs);
        auto peer_index = 0;
        sync {
            while (peer_index < num_peers) {
                auto result = get(rxs[peer_index]);
                peer_index += 1;
            }
        }
        counter += 1;
    }
}
```

But this is not the intended behavior: we want the producer components to be able to run independently of one another. This requires a change in the semantics of the language! Each peer is no longer automatically dragged into the synchronous round. Instead, only once the first message from a peer is received through a `get` call do the two components merge their synchronous rounds.

With this change to the runtime, the first version (written above) produces the intended behavior: the consumer accepts one value and synchronizes with its sender, then moves on to the next round and synchronizes with the next sender.

But what we would really like to do is synchronize with *any* of the peers that happens to have its work ready for consumption. For this, the select statement is introduced into the language. This statement describes a set of possible behaviors we could execute, each with an associated set of ports. When all ports associated with a behavior have a message ready to be read, that behavior executes. So to complete the example above, we have:

```
comp data_receiver_v3(in<u64> rx_a, in<u64> rx_b, in<u64> rx_c, u32 num_rounds) {
    u32 counter = 0;
    auto rxs = { rx_a, rx_b, rx_c };

    u32 received_from_a = 0;
    u32 received_from_b_or_c = 0;
    u32 received_from_a_or_c = 0;
    u64 sum_received_from_c = 0;

    while (counter < num_rounds*3) {
        sync {
            select {
                auto value = get(rx_a) -> {
                    received_from_a += 1;
                    received_from_a_or_c += 1;
                }
                auto value = get(rx_b) -> {
                    received_from_b_or_c += 1;
                }
                auto value = get(rx_c) -> {
                    received_from_a_or_c += 1;
                    received_from_b_or_c += 1;
                    sum_received_from_c += value;
                }
            }
        }
        counter += 1;
    }
}
```
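For completeness, here is a minimal sketch of how these components could be wired together. It is not part of the release examples; the producer ranges and the number of rounds are arbitrary values chosen purely for illustration:

```
comp main() {
    // One channel per producer.
    channel tx_a -> rx_a;
    channel tx_b -> rx_b;
    channel tx_c -> rx_c;

    // Three independent producers, each working on its own range of values.
    new data_producer(tx_a, 0, 10);
    new data_producer(tx_b, 10, 20);
    new data_producer(tx_c, 20, 30);

    // The select-based receiver synchronizes with whichever producer is ready.
    new data_receiver_v3(rx_a, rx_b, rx_c, 5);
}
```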

## Run-time error handling

We have an initial implementation of error handling and reporting. Roughly speaking: once a component has failed, it cannot complete any current or future synchronous rounds anymore. Hence, apart from some edge cases, any attempt by a peer to receive a message from it should cause a failure at that peer as well. The example below looks at the various places where a component can crash, and at how its neighboring peer handles receiving messages: sometimes the crash of the first component propagates, and sometimes it is blocked.

```
enum ErrorLocation {
    BeforeSync,
    DuringSyncBeforeFirstInteraction,
    DuringSyncBeforeSecondInteraction,
    DuringSyncAfterInteractions,
    AfterSync,
}

func crash() -> u8 {
    return {}[0]; // access index 0 of an empty array
}

comp sender_and_crasher(out<u32> value, ErrorLocation loc) {
    if (loc == ErrorLocation::BeforeSync) { crash(); }
    sync {
        if (loc == ErrorLocation::DuringSyncBeforeFirstInteraction) { crash(); }
        put(value, 0);
        if (loc == ErrorLocation::DuringSyncBeforeSecondInteraction) { crash(); }
        put(value, 1);
        if (loc == ErrorLocation::DuringSyncAfterInteractions) { crash(); }
    }
    if (loc == ErrorLocation::AfterSync) { crash(); }
}

comp receiver(in<u32> value) {
    sync {
        auto a = get(value);
        auto b = get(value);
    }
}

comp main() {
    channel tx -> rx;

    new sender_and_crasher(tx, ErrorLocation::AfterSync);
    new receiver(rx);
}
```

Note that when we run the example with the error location before or during the sync block, the receiver always crashes. However, the location where it crashes is somewhat unpredictable! Due to the asynchronous nature of the runtime, a sender of messages will simply `put` the value onto the port and continue execution. So even though the sender component might already be done with its sync round, the receiver officially still has to receive its first message. In any case, a neat error message is displayed in the console (or wherever such diagnostics are reported).

In particular, given the asynchronous nature of the runtime, the receiver has to find out that the peer component has crashed, yet it may still be able to finish the current synchronous round. This happens when the peer component crashes just *after* the synchronous round. However, it can also happen that the receiver learns that its peer crashed *before* it learns that the synchronous round has succeeded.

## Transmitting ports through ports

Since this release, transmitting ports is possible. This means that we can send ports through ports. In fact, we can send ports that may send ports that may send ports, and so on. But don't be fooled by the apparent complexity. The inner type `T` of a port like `in<T>` simply states the message type. Should the type `T` contain one or more ports, then we kick off a bit of code that takes care of transferring those ports. Should a port inside of `T` itself, after being received, transmit a port, then we simply kick off that same procedure again.
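As a quick illustration of what "ports that send ports that send ports" looks like at the type level, here is a minimal sketch (not one of the release examples) of the receiving side with two levels of nesting; the matching sender components are omitted:

```
// Sketch only: the message on `rx` is itself a port, and the message on that
// port is yet another port, over which the actual value finally arrives.
comp doubly_nested_getter(in<in<in<u32>>> rx) {
    u32 value = 0;
    sync {
        auto inner_rx = get(rx);            // first hand-off: receive a port
        auto innermost_rx = get(inner_rx);  // second hand-off: receive another port
        value = get(innermost_rx);          // finally, receive the value itself
    }
}
```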

In the simplest case, one component transmits the receiving end of a channel to another component, which then uses that receiving end to receive a value. The example below shows this:

```
comp port_sender(out<in<u32>> tx, in<u32> to_transmit) {
    sync put(tx, to_transmit);
}

comp port_receiver_and_value_getter(in<in<u32>> rx, u32 expected_value) {
    u32 got_value = 0;
    sync {
        auto port = get(rx);
        got_value = get(port);
    }
    if (expected_value == got_value) {
        print("got the expected value :)");
    } else {
        print("got a different value :(");
    }
}

comp value_sender(out<u32> tx, u32 to_send) {
    sync put(tx, to_send);
}

comp main() {
    u32 value = 1337_2392;

    channel port_tx -> port_rx;
    channel value_tx -> value_rx;
    new port_sender(port_tx, value_rx);
    new port_receiver_and_value_getter(port_rx, value);
    new value_sender(value_tx, value);
}
```

Of course, we may do something a little more complicated than this. Suppose that we don't send just one port, but a whole series of ports: we use an `Option` union type to turn an array of ports into a series of messages containing ports, each sent to a specific component.

```
union Option<T> {
    Some(T),
    None,
}

comp port_sender(out<Option<in<u32>>>[] txs, in<u32>[] to_transmit) {
    auto num_peers = length(txs);
    auto num_ports = length(to_transmit);

    auto num_per_peer = num_ports / num_peers;
    auto num_remaining = num_ports - (num_per_peer * num_peers);

    auto peer_index = 0;
    auto port_index = 0;
    while (peer_index < num_peers) {
        auto peer_port = txs[peer_index];
        auto counter = 0;

        // Distribute part of the ports to one of the peers.
        sync {
            // Sending the main batch of ports for the peer
            while (counter < num_per_peer) {
                put(peer_port, Option::Some(to_transmit[port_index]));
                port_index += 1;
                counter += 1;
            }

            // Sending the remainder of ports, one per peer until they're gone
            if (num_remaining > 0) {
                put(peer_port, Option::Some(to_transmit[port_index]));
                port_index += 1;
                num_remaining -= 1;
            }

            // Finish the custom protocol by sending nothing, which indicates to
            // the peer that it has received all the ports we have to hand out.
            put(peer_port, Option::None);
        }

        peer_index += 1;
    }
}
```

And here we have the component that receives on that port. We can design the synchronous regions any way we want. In this case, while receiving ports we synchronize only with `port_sender`, but the moment we receive values we synchronize with everyone.

```
comp port_receiver(in<Option<in<u32>>> port_rxs, out<u32> sum_tx) {
    // Receive all ports
    auto value_rxs = {};

    sync {
        while (true) {
            auto maybe_port = get(port_rxs);
            if (let Option::Some(certainly_a_port) = maybe_port) {
                value_rxs @= { certainly_a_port };
            } else {
                break;
            }
        }
    }

    // Receive all values
    auto received_sum = 0;

    sync {
        auto port_index = 0;
        auto num_ports = length(value_rxs);
        while (port_index < num_ports) {
            auto value = get(value_rxs[port_index]);
            received_sum += value;
            port_index += 1;
        }
    }

    // And send the sum
    sync put(sum_tx, received_sum);
}
```

Now we need something to send the values; we'll keep it incredibly simple. Namely:

```
comp value_sender(out<u32> tx, u32 value_to_send) {
    sync put(tx, value_to_send);
}

comp sum_collector(in<u32>[] partial_sum_rx, out<u32> total_sum_tx) {
    auto sum = 0;
    auto index = 0;
    while (index < length(partial_sum_rx)) {
        sync sum += get(partial_sum_rx[index]);
        index += 1;
    }
    sync put(total_sum_tx, sum);
}
```

Finally, we need a component that sets this entire system of components up, so we write the following entry point:

```
comp main() {
    auto num_value_ports = 32;
    auto num_receivers = 3;

    // Construct the senders of values
    auto value_port_index = 1;
    auto value_rx_ports = {};
    while (value_port_index <= num_value_ports) {
        channel value_tx -> value_rx;
        new value_sender(value_tx, value_port_index);
        value_rx_ports @= { value_rx };
        value_port_index += 1;
    }

    // Construct the components that will receive groups of value-receiving
    // ports
    auto receiver_index = 0;
    auto sum_combine_rx_ports = {};
    auto port_tx_ports = {};

    while (receiver_index < num_receivers) {
        channel sum_tx -> sum_rx;
        channel port_tx -> port_rx;
        new port_receiver(port_rx, sum_tx);

        sum_combine_rx_ports @= { sum_rx };
        port_tx_ports @= { port_tx };
        receiver_index += 1;
    }

    // Construct the component that redistributes the total number of input
    // ports.
    new port_sender(port_tx_ports, value_rx_ports);

    // Construct the component that computes the sum of all sent values
    channel total_value_tx -> total_value_rx;
    new sum_collector(sum_combine_rx_ports, total_value_tx);

    auto expected = num_value_ports * (num_value_ports + 1) / 2;
    auto received = 0;

    sync received = get(total_value_rx);

    if (expected == received) {
        print("got the expected value!");
    } else {
        print("got something entirely different");
    }
}
```

## Native TCP components

Also new in this release are native components. Native components are provided by the underlying implementation of Reowolf and allow protocols to be built on top of other protocols. We demonstrate this by introducing native components for the Transmission Control Protocol (TCP). Hence, Reowolf can now be used to express protocols that assume an underlying implementation of TCP.

We'll start by importing the standard library module that defines the built-in components supporting a TCP listener and a TCP client. We also define a little utility function (`listen_port`), used throughout this example, that returns the port we're going to listen on.

```
import std.internet::*;

func listen_port() -> u16 {
    return 2392;
}
```
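The code below also uses a few names from `std.internet`: a `TcpConnection` struct (with a `tx` port for sending commands and an `rx` port for receiving data, used as `conn.tx` and `conn.rx` below) and two command unions. We do not reproduce the standard library in this post; the following is only a rough sketch reconstructed from how the commands are used here, and the actual declarations in `std.internet` may differ:

```
// Rough sketch, reconstructed from usage in this post -- not the actual
// std.internet declarations.
union ListenerCmd {
    Accept,      // accept the next incoming connection
    Shutdown,    // stop listening and shut the listener down
}

union ClientCmd {
    Send(u8[]),  // send the given bytes to the TCP peer
    Receive,     // request received data in this synchronous round
    Finish,      // finish the current receive interaction
    Shutdown,    // close the TCP connection
}
```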

Next we define our server. For the purposes of this example, the server accepts a fixed number of connections and then stops listening. At that point it waits until it receives a signal telling it to shut down.

```
comp server(u32 num_connections, in<()> shutdown) {
    // Here we set up the channel for commands, going to the listener
    // component, and the channel that sends new connections back to us.
    channel listen_cmd_tx -> listen_cmd_rx;
    channel listen_conn_tx -> listen_conn_rx;

    // And here we create the tcp_listener, imported from the standard library.
    new tcp_listener({}, listen_port(), listen_cmd_rx, listen_conn_tx);

    // Here we set up a variable that will hold our received connections
    channel client_cmd_tx -> unused_client_cmd_rx;
    channel unused_client_data_tx -> client_data_rx;
    auto new_connection = TcpConnection{
        tx: client_cmd_tx,
        rx: client_data_rx,
    };

    auto connection_counter = 0;
    while (connection_counter < num_connections) {
        // We wait until we receive a new connection
        sync {
            // The way the standard library is currently written, we need to
            // send the `tcp_listener` component a command telling it to listen
            // for the next connection. This is only one way in which the
            // standard library could be written. We could also write it in
            // such a way that a separate component buffers new incoming
            // connections, so that we only have to `get` from that separate
            // component.
            //
            // Note that when we get such a new connection (see the
            // TcpConnection struct in the standard library), the peers of the
            // two ports are already hooked up to a `tcp_client` component, also
            // defined in the standard library.
            put(listen_cmd_tx, ListenerCmd::Accept);
            new_connection = get(listen_conn_rx);
        }

        // In any case, now that the code is here, the synchronous round that
        // governed receiving the new connection has completed. And so we send
        // that connection off to a handler component. In this case that is the
        // `echo_machine` component, defined in this file as well.
        new echo_machine(new_connection);
        connection_counter += 1;
    }

    // When all of the desired connections have been handled, we first await a
    // shutdown signal from another component.
    sync auto v = get(shutdown);

    // And once we have received that signal, we instruct the listener
    // component to shut down.
    sync put(listen_cmd_tx, ListenerCmd::Shutdown);
}
```

The following piece of code is the component spawned by the server component to handle new connections. All it does is wait for a single incoming TCP packet, from which it expects a single byte of data, and then echo that byte back to the peer.

```
comp echo_machine(TcpConnection conn) {
    auto data_to_echo = {};

    // Here is where we receive a message from a peer ...
    sync {
        put(conn.tx, ClientCmd::Receive);
        data_to_echo = get(conn.rx);
        put(conn.tx, ClientCmd::Finish);
    }

    // ... and send it right back to our peer.
    sync put(conn.tx, ClientCmd::Send(data_to_echo));

    // And we ask the `tcp_client` to shut down neatly.
    sync put(conn.tx, ClientCmd::Shutdown);
}

// Here is the component that we will instantiate to connect to the `server`
// component above (more specifically, to the `tcp_listener` component
// instantiated by the `server`). This is the component that will ask the
// `echo_machine` component to echo a byte of data.

comp echo_requester(u8 byte_to_send, out<()> done) {
    // We instantiate the `tcp_client` from the standard library. This will
    // perform the "connect" call to the `tcp_listener`.
    channel cmd_tx -> cmd_rx;
    channel data_tx -> data_rx;
    new tcp_client({127, 0, 0, 1}, listen_port(), cmd_rx, data_tx);

    // And once we are connected, we send the single byte to the other side.
    sync put(cmd_tx, ClientCmd::Send({ byte_to_send }));

    // This sent byte will arrive at the `echo_machine`, which will send it
    // right back to us. So here is where we wait for that byte to arrive.
    auto received_byte = byte_to_send + 1;
    sync {
        put(cmd_tx, ClientCmd::Receive);
        received_byte = get(data_rx)[0];
        put(cmd_tx, ClientCmd::Finish);
    }

    // We make sure that we got back what we sent
    if (byte_to_send != received_byte) {
        crash();
    }

    // And we shut down the TCP connection
    sync put(cmd_tx, ClientCmd::Shutdown);

    // And finally we send a signal to another component (the `main` component)
    // to let it know we have finished our little protocol.
    sync put(done, ());
}
```

And here is the entry point for our program:

```
comp main() {
    // Some settings for the example
    auto num_connections = 12;

    // We create a new channel that allows us to shut down our server component.
    // That channel being created, we can instantiate the server component.
    channel shutdown_listener_tx -> shutdown_listener_rx;
    new server(num_connections, shutdown_listener_rx);

    // Here we create all the requesters that will ask their peer to echo back
    // a particular byte.
    auto connection_index = 0;
    auto all_done = {};
    while (connection_index < num_connections) {
        channel done_tx -> done_rx;
        new echo_requester(cast(connection_index), done_tx);
        connection_index += 1;
        all_done @= {done_rx};
    }

    // Here our program starts to shut down. First we'll wait until all of our
    // requesting components have gotten back the byte they're expecting.
    auto counter = 0;
    while (counter < length(all_done)) {
        sync auto v = get(all_done[counter]);
        counter += 1;
    }

    // And we shut down our server.
    sync put(shutdown_listener_tx, ());
}
```

## Project documentation

Detailed documentation has been added, giving users and developers background information about the current implementation of Reowolf 2.