Model update
In this task, a recognition model is updated by contributors defined as `model_engineer`.
Assets
For this procedure, two sets of assets are required: one for the `model_creator` and one for the `model_contributor`. Example assets can be found in `substrate-client-decentralml/assets`.
Here is an example of the code and assets for the `model_creator`:
```
model_creator
├── __init__.py
├── model_creator.py
├── setup.sh
├── requirements.txt
└── settings.py
```
- `model_creator.py` contains the Python code for generating the first model, saving it, and federating the results once the contributors have completed their training.
- `setup.sh` is a script to set up the development environment for the `model_creator`.
- `requirements.txt` lists the Python requirements for the model development.
- `settings.py` is a support file for specifying the model parameters for the `model_contributor` and the creator (a sketch of possible contents follows this list).
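The exact contents of `settings.py` depend on the model being built; a minimal sketch could look like the following. All names and values here are purely illustrative assumptions, not the contents of the file in the repository.

```python
# settings.py -- illustrative sketch only; the real file in
# substrate-client-decentralml/assets may define different parameters.
INPUT_SHAPE = (28, 28)    # input dimensions expected by the model
NUM_CLASSES = 10          # number of output classes
LEARNING_RATE = 0.001     # optimizer learning rate
EPOCHS = 1                # training epochs for the initial model
OUTPUT_PATH = "./models"  # where saved models are written
MODEL_NAME = "example_model"
```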
The `model_creator` must also create the Python code for the contributor to perform their task:
```
model_engineer
├── __init__.py
├── model_engineer.py
├── setup.sh
├── requirements.txt
└── settings.py
```
- `model_engineer.py` contains the Python code for training the model.
- `requirements.txt` lists the Python requirements for the model development.
- `settings.py` is a support file for specifying the model parameters for the `model_contributor` and the creator.
- `start_task.sh` is a script for the `model_contributor` to actually execute the task.
Procedure
The `model_creator` starts by creating a model structure and compiling it:

```python
# assets/model_creator/model_creator.py
def create_model():
    model = tf.keras.models.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation='relu'),
        tf.keras.layers.Dense(10)
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(0.001),
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=[tf.keras.metrics.SparseCategoricalAccuracy()],
    )
    return model
```
Once the first model is generated, it can be trained:
```python
# assets/model_creator/model_creator.py
def train_model(model, x_train, y_train, epochs=1):
    model.fit(x_train, y_train, epochs=epochs)
    return model
```
Evaluated:
```python
# assets/model_creator/model_creator.py
def evaluate_model(model, x_test, y_test):
    return model.evaluate(x_test, y_test)
```
And more importantly, saved for the model contributors to use:
```python
# assets/model_creator/model_creator.py
def save_model(model, output_path, model_name):
    model.save(f"{output_path}/{model_name}")
```
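Taken together, a minimal driver for these steps might look like the sketch below. The dataset (MNIST), the import from `model_creator`, and the output path are illustrative assumptions, not requirements of the procedure.

```python
# Hypothetical driver tying the model_creator helpers together.
# Dataset choice (MNIST) and paths are illustrative assumptions.
import tensorflow as tf
from model_creator import create_model, train_model, evaluate_model, save_model

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

model = create_model()
model = train_model(model, x_train, y_train, epochs=1)  # optional initial training
print(evaluate_model(model, x_test, y_test))            # [loss, accuracy]
save_model(model, output_path="./models", model_name="example_model")
```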
In a federated system, the training step is generally delegated to the model contributors, but the model creator could perform some training just to initiate the system. The contributors can then subsequently train on new data.
Note that all these steps are part of `model_creator.py` in the assets folder.

The `model_creator` can then delegate the subsequent update of the model structure to the `model_engineer` contributors by creating a corresponding task:

```python
# decentralml/create_task.py
def create_task_model_engineer(expiration_block, substrate, sudoaccount, passphrase,
                               task_type, question, pays_amount, max_assignments,
                               validation_strategy, model_engineer_path,
                               model_engineer_storage_type,
                               model_engineer_storage_credentials)
```
where `model_engineer_path` is the path to the assets for the `model_contributor`. Creating the task also uploads the corresponding assets to a remote/shared storage.
For additional information on the Substrate parameters (e.g. expiration block, substrate, etc.), consult the documentation of the Python client or view the example (https://github.com/livetreetech/DecentralML/blob/main/substrate-client-decentralml/src/decentralml/create_task.py).
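As an illustration only, a call might look like the sketch below. The node URL, account, passphrase, payment amount, task type and storage values are placeholders, and the connection setup is an assumption rather than the documented client setup.

```python
# Hypothetical invocation of create_task_model_engineer; all values are
# placeholders and the connection setup is an assumption, not the documented API.
from substrateinterface import SubstrateInterface, Keypair
from decentralml.create_task import create_task_model_engineer

substrate = SubstrateInterface(url="ws://127.0.0.1:9944")  # assumed local node
sudoaccount = Keypair.create_from_uri("//Alice")           # placeholder account

create_task_model_engineer(
    expiration_block=100_000,
    substrate=substrate,
    sudoaccount=sudoaccount,
    passphrase="my-passphrase",
    task_type="ModelEngineer",                # placeholder task type
    question="Update the model structure",
    pays_amount=1_000,
    max_assignments=5,
    validation_strategy="ManualAccept",       # one of the policies described below
    model_engineer_path="assets/model_engineer",
    model_engineer_storage_type="local",      # placeholder storage settings
    model_engineer_storage_credentials="",
)
```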
The `model_engineer` can then `list_task` (see Listing objects) and accept a task with:

```python
# decentralml/assign_task.py
def assign_task(substrate, sudoaccount, passphrase, task_id)
```
by specifying the `task_id`. Assigning a task will download the corresponding assets for the model contributor task.
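A hypothetical sketch of accepting a task from the contributor side is shown below; the connection details, account handling and import path are assumptions, not the documented client setup.

```python
# Hypothetical sketch only: connection details and import paths are assumptions.
from substrateinterface import SubstrateInterface, Keypair
from decentralml.assign_task import assign_task

substrate = SubstrateInterface(url="ws://127.0.0.1:9944")  # assumed local node
sudoaccount = Keypair.create_from_uri("//Bob")             # placeholder account
assign_task(substrate, sudoaccount, "my-passphrase", task_id=1)
```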
The `model_engineer` can then set up their development environment using the `setup.sh` script:

```bash
./setup.sh
```
The contributor can then start working on redefining and updating the structure of the model provided by the creator.
In order to do so, the `model_engineer` can use the `redefine_model` function provided in this example:

```python
# assets/model_engineer/model_engineer.py
def redefine_model():
    model = tf.keras.models.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation='relu'),
        tf.keras.layers.Dense(10)
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(0.001),
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=[tf.keras.metrics.SparseCategoricalAccuracy()],
    )
    return model
```
Once the `model_engineer` has updated the model structure, they can train and evaluate the new model:

```python
# assets/model_engineer/model_engineer.py
def train_model(model, x_train, y_train, epoch=100, batch_size=32):
    model.fit(x_train, y_train, epochs=epoch, batch_size=batch_size)
    return model
```
```python
# assets/model_engineer/model_engineer.py
def evaluate_model(model, x_test, y_test):
    return model.evaluate(x_test, y_test)
```
Finally, once the engineer is satisfied, they can save the model to send it back as the result of the task:

```python
# assets/model_engineer/model_engineer.py
def save_model(model, output_path, model_name, contributor_id_length=10):
    contributor_id = get_random_string(contributor_id_length)
    model.save(f"{output_path}/{model_name}_{contributor_id}")
```
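The `get_random_string` helper used above is not shown in the snippet; a minimal implementation could look like the following (an assumption, as the asset file may implement it differently).

```python
# Possible implementation of the get_random_string helper used by save_model
# (an assumption; the asset file may implement it differently).
import random
import string

def get_random_string(length):
    return "".join(random.choice(string.ascii_lowercase) for _ in range(length))
```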
Saving the model creates an output folder which includes the structure and the weights. The name of the folder includes the model name and an id for the contributor:
```
example_model_hirabujcbw/
├── assets
├── fingerprint.pb
├── keras_metadata.pb
├── saved_model.pb
└── variables
    ├── variables.data-00000-of-00001
    └── variables.index
```
The contributor ID in this example is a randomly generated string used to uniquely identify the different models generated by the contributors as part of the training.
Once the `model_engineer` has completed their task, they can send the results using:

```python
# decentralml/send_task_result.py
def send_task_result(substrate, keypair, submission_id, result, result_path,
                     result_storage_type, result_storage_credentials)
```
This function accepts a parameter `result_path`, which must be set to the output folder containing the saved model. Sending the results uploads the model training results to a remote and/or shared storage.
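As an illustration, assuming the contributor saved the model to `./results/example_model_hirabujcbw`, the call might look like the sketch below; the connection, keypair, `result` text and storage values are placeholders.

```python
# Hypothetical call; connection, keypair and storage values are assumptions.
from substrateinterface import SubstrateInterface, Keypair
from decentralml.send_task_result import send_task_result

substrate = SubstrateInterface(url="ws://127.0.0.1:9944")  # assumed local node
keypair = Keypair.create_from_uri("//Bob")                 # placeholder account

send_task_result(
    substrate=substrate,
    keypair=keypair,
    submission_id=1,                                   # id of the accepted assignment
    result="model update completed",                   # free-text result field (assumed)
    result_path="./results/example_model_hirabujcbw",  # folder with the saved model
    result_storage_type="local",                       # placeholder storage settings
    result_storage_credentials="",
)
```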
The `model_creator` can list the available results for each task using `list_task_results` (see Listing objects). Once a result is available, the `model_creator` can start validating the results using `validate_task_results`. The validation of the results can be performed according to three policies:

- AutoAccept: the results are automatically accepted.
- ManualAccept: the `model_creator` manually accepts each task result.
- CustomAccept: the `model_creator` can implement custom methods for automatically validating the results (see the sketch after this list).
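As an illustration of what a CustomAccept policy could do, the sketch below accepts a result only if the submitted model reaches a minimum accuracy on a held-out test set. The accuracy threshold and the decision interface are assumptions; only `load_model` and `evaluate_model` come from the `model_creator` assets shown in this section.

```python
# Hypothetical CustomAccept check, reusing the model_creator helpers.
# The 0.9 threshold and the boolean decision interface are assumptions.
from model_creator import load_model, evaluate_model

def custom_accept(result_path, model_name, x_test, y_test, min_accuracy=0.9):
    model = load_model(result_path, model_name)
    loss, accuracy = evaluate_model(model, x_test, y_test)
    return accuracy >= min_accuracy  # True -> accept the result, False -> reject
```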
Starting the validation process downloads the results and the corresponding saved models. In this example, we explain a manual validation process. The `model_creator` can validate the results by loading the federated models. For this example, the functions to federate the models are included in `assets/model_creator/model_creator.py`.

```python
# assets/model_creator/model_creator.py
def load_model(model_path, model_name):
    model = tf.keras.models.load_model(f"{model_path}/{model_name}")
    model.summary()  # print the model architecture
    return model
```
This model can then be evaluated:
```python
# assets/model_creator/model_creator.py
def evaluate_model(model, x_test, y_test):
    return model.evaluate(x_test, y_test)
```
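The federation step itself, combining the contributors' models into an updated model, is not shown above. A common approach is to average the weights of structurally identical models (FedAvg-style); the sketch below illustrates that idea and is not the exact function in `model_creator.py`.

```python
# Illustrative federated-averaging sketch; the actual federation code in
# assets/model_creator/model_creator.py may differ.
import numpy as np

def federate_models(models):
    """Average the weights of structurally identical Keras models."""
    averaged_weights = [
        np.mean(layer_weights, axis=0)
        for layer_weights in zip(*(m.get_weights() for m in models))
    ]
    federated = models[0]  # reuse the first model's (compiled) structure
    federated.set_weights(averaged_weights)
    return federated
```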
Once the validation process is complete, the `model_creator` or the automatic validation procedure can either accept or reject the results, using respectively `accept_task_results()` or `reject_task_results()`. Accepting the results issues the payment to the contributor.