syft.frameworks.torch.hook.hook

Module Contents

Classes

TorchHook

A Hook which Overrides Methods on PyTorch Tensors.

class syft.frameworks.torch.hook.hook.TorchHook(torch, local_worker: BaseWorker = None, is_client: bool = True, verbose: bool = False, seed=None)

Bases: syft.generic.frameworks.hook.hook.FrameworkHook

A Hook which Overrides Methods on PyTorch Tensors.

The purpose of this class is to:
  • extend torch methods to allow for the moving of tensors from one worker to another
  • override torch methods to execute commands on one worker that are called on tensors controlled by the local worker

This class is typically the first thing you will initialize when using PySyft with PyTorch because it is responsible for augmenting PyTorch with PySyft’s added functionality (such as remote execution).

Parameters
  • local_worker – An optional BaseWorker instance that lets you provide a local worker as a parameter, which TorchHook will assume to be the worker owned by the local machine. If you leave it empty, TorchHook will automatically initialize a workers.VirtualWorker under the assumption you’re looking to do local experimentation or development.

  • is_client – An optional boolean parameter (default True) indicating whether TorchHook is being initialized as an end-user client. This can impact whether or not variables are deleted when they fall out of scope. If you set this incorrectly on an end-user client, Tensors and Variables will never be deleted. If you set this incorrectly on a remote machine (not a client), tensors will not get saved. It only matters if you’re not initializing the local worker yourself.

  • verbose – An optional boolean parameter (default False) to indicate whether or not to print the operations as they occur.

  • queue_size – An optional integer parameter (default 0) specifying the maximum length of the list that stores the messages to be sent.

Example

>>> import torch as th
>>> import syft as sy
>>> hook = sy.TorchHook(th)
Hooking into Torch...
Overloading Complete.
# constructing a normal torch tensor in pysyft
>>> x = th.Tensor([-2,-1,0,1,2,3])
>>> x
-2
-1
0
1
2
3
[syft.core.frameworks.torch.tensor.FloatTensor of size 6]
framework

In Syft there is a syft.framework value that can contain only one framework. Ideally it would contain a list of supported frameworks.

We do this because Plans have a method to reduce the number of actions that are traced (and then sent). Actions that do not return a result, modify a placeholder, operate in place, or change the global state are eliminated from the traced list.

create_shape(cls, shape_dims)

Factory method for creating a generic FrameworkShape.

create_wrapper(cls, wrapper_type)

Factory method for creating a generic wrapper of type wrapper_type.

create_zeros(cls, *shape, dtype=None, **kwargs)

Factory method for creating a generic zero FrameworkTensor.
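The create_* classmethods follow the factory-method pattern: the framework-agnostic base declares the factory, and the torch hook implements it (create_zeros ultimately delegates to torch.zeros). A minimal pure-Python sketch of this pattern, with illustrative class names and a list-based stand-in so no torch install is assumed:

```python
# Sketch of the factory-method pattern behind the create_* classmethods.
# ToyHook and its list-based zeros are hypothetical stand-ins; the real
# TorchHook.create_zeros delegates to torch.zeros.
class FrameworkHook:
    @classmethod
    def create_zeros(cls, *shape, dtype=None, **kwargs):
        raise NotImplementedError

class ToyHook(FrameworkHook):
    @classmethod
    def create_zeros(cls, *shape, dtype=None, **kwargs):
        # Stand-in for torch.zeros, supporting 1-D and 2-D shapes.
        if len(shape) == 1:
            return [0] * shape[0]
        rows, cols = shape
        return [[0] * cols for _ in range(rows)]

assert ToyHook.create_zeros(3) == [0, 0, 0]
assert ToyHook.create_zeros(2, 2) == [[0, 0], [0, 0]]
```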

_hook_native_tensor(self, tensor_type: type, syft_type: type)

Adds PySyft Tensor Functionality to the given native tensor type.

Overloads the given native Torch tensor to add PySyft Tensor Functionality. Overloading involves modifying the tensor type with PySyft’s added functionality. You may read about what kind of modifications are made in the methods that this method calls.

Parameters
  • tensor_type – The type of tensor being hooked (in this refactor this is only ever torch.Tensor, but in previous versions of PySyft this iterated over all tensor types).

  • syft_type – The abstract type whose methods should all be added to the tensor_type class. In practice this is always TorchTensor. Read more about it there.

__hook_properties(self, tensor_type)
_hook_syft_tensor_methods(self, syft_type: type)

Add hooked versions of all methods in to_auto_overload[tensor_type] to the syft_type, so that they act like regular tensors in terms of functionality, but instead of performing the native tensor method, the call is forwarded to each share when it is relevant.

Parameters
  • tensor_type – The tensor type to which we are adding methods.

  • syft_type – the syft type to which the hooked methods are added

_hook_private_tensor_methods(self, syft_type: type)

Add hooked versions of all methods of the tensor_type to the Private Tensor: it will add references to its parents and save the command/action history.

_hook_worker_methods(self)
_get_hooked_base_worker_method(hook_self, attr)
_hook_additive_shared_tensor_methods(self)

Add hooked versions of all methods of the torch Tensor to the Additive Shared tensor: instead of performing the native tensor method, the call is forwarded to each share when it is relevant.
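The forwarding described above can be sketched in pure Python. This is a minimal illustration, not PySyft's implementation: the class and helper names are hypothetical, and negation is used because it is genuinely share-wise (negating every share negates the shared value).

```python
# A minimal sketch (names hypothetical) of forwarding a method call to
# every share of an additive shared value.
class AdditiveShared:
    def __init__(self, shares):
        # shares maps a worker name to that worker's share of the value
        self.shares = shares

def forward_to_shares(name):
    """Build a method that applies the named operation to each share."""
    def method(self, *args, **kwargs):
        return AdditiveShared(
            {worker: getattr(share, name)(*args, **kwargs)
             for worker, share in self.shares.items()}
        )
    return method

# Negation is share-wise: negating every share negates the shared value.
AdditiveShared.__neg__ = forward_to_shares("__neg__")

x = AdditiveShared({"alice": 3, "bob": 4})   # shared value: 7
y = -x
assert y.shares == {"alice": -3, "bob": -4}  # shared value: -7
```

Note that not every operation may be forwarded this naively (the docstring says "when it is relevant"): for example, adding a public constant to an additive sharing must not be applied to every share.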

_hook_parameters(self)

This method overrides the torch Parameter class such that it works correctly with our overridden tensor types. The native torch Parameter class kept deleting all of our attributes on our custom tensors, so we wrote our own.

_hook_torch_module(self)

Overloads functions in the main torch modules. The way this is accomplished is by first moving all existing module functions in the torch module to native_<function_name_here>.

Example

The real torch.cat() will become torch.native_cat(), and torch.cat() will contain our hooking code.

_get_hooked_additive_shared_method(hook_self, attr)

Hook a method to send it to multiple remote workers.

Parameters

attr (str) – the method to hook

Returns

the hooked method

_hook_tensor(hook_self)

Hooks the function torch.tensor(). We need to do this separately from hooking the class because internally torch does not pick up the change to add the args.

Parameters

hook_self – the hook itself

classmethod _transfer_methods_to_native_tensor(cls, tensor_type: type, syft_type: type)

Adds methods from the TorchTensor class to the native torch tensor.

The class TorchTensor is a proxy to avoid extending directly the torch tensor class.

Parameters

tensor_type – The tensor type to which we are adding methods from TorchTensor class.
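The method transfer described above can be sketched in pure Python. The class names below are hypothetical stand-ins for torch.Tensor and TorchTensor; the sketch copies each proxy method onto the native type, preserving any shadowed native implementation under a native_ prefix:

```python
# A pure-Python sketch (class names hypothetical) of transferring methods
# from a proxy class onto a native type.
class NativeTensor:
    def add(self, other):
        return "native add"

class ProxyTensor:
    def add(self, other):
        return "syft add"
    def send(self, worker):
        return f"sent to {worker}"

def transfer_methods(tensor_type, syft_type):
    for name, attr in vars(syft_type).items():
        if name.startswith("__"):
            continue  # leave dunder machinery alone in this sketch
        if hasattr(tensor_type, name):
            # Preserve the native implementation under a native_ prefix.
            setattr(tensor_type, "native_" + name, getattr(tensor_type, name))
        setattr(tensor_type, name, attr)

transfer_methods(NativeTensor, ProxyTensor)
t = NativeTensor()
assert t.add(None) == "syft add"           # proxy method now in front
assert t.native_add(None) == "native add"  # original still reachable
assert t.send("alice") == "sent to alice"  # new capability added
```

Using a proxy class this way avoids subclassing the native tensor type while still letting every native tensor carry PySyft's methods.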

_hook_module(self)

Overloads torch.nn.Module with PySyft functionality; this is the primary module responsible for core ML functionality such as neural network layers and loss functions.

It is important to note that all the operations are actually in-place.
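A sketch of what in-place overloading of a module class can look like: attach a send() that forwards every parameter to a worker. The Param and Module classes and the worker name are hypothetical stand-ins, not PySyft's actual classes:

```python
# Hypothetical sketch of in-place Module overloading: attach a send()
# that forwards every parameter to a worker.
class Param:
    def __init__(self, value):
        self.value = value
        self.location = None  # worker currently holding this parameter
    def send(self, worker):
        self.location = worker
        return self

class Module:
    def __init__(self):
        self._params = [Param(1.0), Param(2.0)]
    def parameters(self):
        return self._params

def module_send(self, worker):
    # Forward each parameter; the module object itself is reused.
    for p in self.parameters():
        p.send(worker)
    return self

# The class is modified in place, as the docstring notes.
Module.send = module_send

m = Module().send("alice")
assert all(p.location == "alice" for p in m.parameters())
```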

_hook_optim(self)

Overloads torch.optim.Optimizer with PySyft functionality. Optimizer hyperparameters must be converted to fixed precision to interact with fixed precision or additive shared tensors.

It is important to note that all the operations are actually in-place.

set_verbose(self, flag)