Advanced usage¶
Notebook configuration¶
[1]:
import sys
import numpy as np
import cnnclustering
from cnnclustering import cluster, hooks
from cnnclustering import _types, _fit
Print Python and package version information:
[2]:
# Version information
print("Python: ", *sys.version.split("\n"))
print("Packages:")
for package in [np, cnnclustering]:
    print(f"    {package.__name__}: {package.__version__}")
Python: 3.8.8 | packaged by conda-forge | (default, Feb 20 2021, 16:22:27) [GCC 9.3.0]
Packages:
numpy: 1.20.2
cnnclustering: 0.4.3
Clustering initialisation¶
Short initialisation for point coordinates¶
In the Basic usage tutorial, we saw how to create a Clustering
object from a list of point coordinates.
[3]:
# Three dummy points in three dimensions
points = [
[0, 0, 0],
[1, 1, 1],
[2, 2, 2]
]
clustering = cluster.Clustering(points)
The created clustering
object is now ready to execute a clustering on the provided input data. In fact, this default initialisation works in the same way with any Python sequence of sequences.
[4]:
# Ten random points in four dimensions
points = np.random.random((10, 4))
clustering = cluster.Clustering(points)
Please note that this only yields meaningful results if the input data does indeed contain point coordinates. When a Clustering
is initialised like this, quite a few steps are carried out in the background to ensure the correct assembly of the object. Specifically, the following things are taken care of:
- The raw input data (here points) is wrapped into a generic input data object (a concrete implementation of the abstract class _types.InputData).
- A generic fitter object (a concrete implementation of the abstract class _fit.Fitter) is selected and associated with the clustering.
- The fitter is equipped with other necessary building blocks.
In consequence, the created clustering
object carries a set of other objects that control how a clustering of the input data is executed.
[5]:
print(clustering)
Clustering(input_data=InputDataExtComponentsMemoryview, fitter=FitterExtBFS(ngetter=NeighboursGetterExtBruteForce(dgetter=DistanceGetterExtMetric(metric=MetricExtEuclideanReduced), sorted=False, selfcounting=True), na=NeighboursExtVectorCPPUnorderedSet, nb=NeighboursExtVectorCPPUnorderedSet, checker=SimilarityCheckerExtSwitchContains, queue=QueueExtFIFOQueue), predictor=None)
To understand the setup steps and the different kinds of participating objects better, let's have a closer look at the default constructor of the Clustering
class in the next section.
Manual custom initialisation¶
The init method of the Clustering
class has the following signature:
[6]:
print(cluster.Clustering.__init__.__doc__, end="\n\n")
Clustering.__init__(self, input_data=None, fitter=None, predictor=None, labels=None, unicode alias: str = u'root', parent=None, **kwargs)
A Clustering
optionally accepts values for the input_data
and fitter
keyword arguments (let's ignore the others for now). A plain instance of the class can be created just like this:
[7]:
plain_clustering = cluster.Clustering()
print(plain_clustering)
Clustering(input_data=None, fitter=None, predictor=None)
Naturally, this object is not set up for an actual clustering.
[8]:
plain_clustering.fit(0.1, 2)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-8-33a427bb7428> in <module>
----> 1 plain_clustering.fit(0.1, 2)
src/cnnclustering/cluster.pyx in cnnclustering.cluster.Clustering.fit()
AttributeError: 'NoneType' object has no attribute 'n_points'
Starting from scratch, we need to provide some input data and associate it with the clustering. Trying to use just raw input data for this, however, will result in an error:
[9]:
plain_clustering.input_data = points
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-9-74ffe4d30bb9> in <module>
----> 1 plain_clustering.input_data = points
src/cnnclustering/cluster.pyx in cnnclustering.cluster.Clustering.input_data()
TypeError: Can't use object of type ndarray as input data. Expected type InputData.
Info: If you know what you are doing, you can still associate arbitrary input data with a clustering by assigning to Clustering._input_data
directly.
We need to provide a valid input data object instead. The recommended type for point coordinates that can be constructed from a 2D NumPy array is _types.InputDataExtComponentsMemoryview
.
[10]:
plain_clustering.input_data = _types.InputDataExtComponentsMemoryview(points)
This input data type is used to wrap the raw data and allows generic access to it which is needed for the clustering. For more information on what exactly has to be implemented by a valid input data type, see the Demonstration of (generic) interfaces tutorial. We could have chosen to pass a valid input data type to a Clustering
directly on initialisation:
[11]:
print(
    cluster.Clustering(
        _types.InputDataExtComponentsMemoryview(points)
    )
)
Clustering(input_data=InputDataExtComponentsMemoryview, fitter=None, predictor=None)
As you see, this initialisation creates a Clustering
that carries the input data wrapped in a suitable type, but nothing else. This is different from the starting example where we passed raw data on initialisation which triggered the assembly of a bunch of other objects.
So we are not done yet, and clustering is not possible because we are still missing a fitter that controls how the clustering should actually be done.
[12]:
plain_clustering.fit(0.1, 2)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-12-33a427bb7428> in <module>
----> 1 plain_clustering.fit(0.1, 2)
src/cnnclustering/cluster.pyx in cnnclustering.cluster.Clustering.fit()
AttributeError: 'NoneType' object has no attribute 'fit'
The default fitter for any common-nearest-neighbours clustering is _fit.FitterExtBFS
. To initialise this fitter, we additionally need to provide the following building blocks, passed as these arguments:
- neighbours_getter: A generic object that defines how neighbourhood information can be retrieved from the input data object. Needs to be a concrete implementation of the abstract class _types.NeighboursGetter.
- neighbours: A generic object to hold the retrieved neighbourhood of one point. Filled by the neighbours_getter. Needs to be a concrete implementation of the abstract class _types.Neighbours.
- neighbour_neighbours: As neighbours. This fitter uses exactly two containers to store the neighbourhoods of two points.
- similarity_checker: A generic object that controls how the common-nearest-neighbours similarity criterion (at least c common neighbours) is checked. Needs to be a concrete implementation of the abstract class _types.SimilarityChecker.
- queue: A generic queuing structure needed for the breadth-first-search approach implemented by the fitter. Needs to be a concrete implementation of the abstract class _types.Queue.
So let’s create these building blocks to prepare a fitter for the clustering. Note that the recommended default neighbours getter (_types.NeighboursGetterExtBruteForce
) in turn requires a distance getter (which controls how pairwise distances between points in the input data are retrieved), which again expects us to define a metric. For the neighbours containers we choose a type that wraps a C++ vector. The similarity check will be done by a set of containment checks, and the queuing
structure will be a C++ queue.
[13]:
# Choose Euclidean metric
metric = _types.MetricExtEuclidean()
distance_getter = _types.DistanceGetterExtMetric(metric)

# Make neighbours getter
neighbours_getter = _types.NeighboursGetterExtBruteForce(
    distance_getter
)

# Make fitter
fitter = _fit.FitterExtBFS(
    neighbours_getter,
    _types.NeighboursExtVector(),
    _types.NeighboursExtVector(),
    _types.SimilarityCheckerExtContains(),
    _types.QueueExtFIFOQueue()
)
This fitter can now be associated with our clustering. With everything in place, a clustering can finally be executed.
[14]:
plain_clustering.fitter = fitter
[15]:
plain_clustering.fit(0.1, 2)
-----------------------------------------------------------------------------------------------
#points r c min max #clusters %largest %noise time
10 0.100 2 None None 0 0.000 1.000 00:00:0.000
-----------------------------------------------------------------------------------------------
The manual way described above to initialise a Clustering
instance is very flexible, as the user can cherry-pick exactly the desired types to modify the different contributing pieces. On the other hand, this approach can be fairly tedious and error-prone. In the next section we will see how this problem is solved by assembling a clustering according to pre-defined schemes.
Initialisation via a builder¶
So far we have seen how to assemble a Clustering
instance from scratch by selecting the individual clustering components manually. In the beginning we also saw that we can create a Clustering
seemingly automatically if we just pass raw data to the constructor. To fill the gap, let's now have a look at how a Clustering
can be created via a builder. A builder is a helper object that serves the purpose of correctly creating a Clustering
based on some preset requirements, a so-called recipe. When we try to initialise a Clustering
with raw input data (that is not wrapped in a valid generic input data type), a ClusteringBuilder
instance actually takes over behind the scenes.
The ClusteringBuilder
class has the following initialisation signature:
[16]:
print(cluster.ClusteringBuilder.__init__.__doc__)
ClusteringBuilder.__init__(self, data, preparation_hook=None, registered_recipe_key=None, clustering_type=None, alias=None, parent=None, **recipe)
It requires some raw input data as the first argument. Apart from that one can use these optional keyword arguments to modify its behaviour:
- preparation_hook: A function that accepts raw input data, does some optional preprocessing, and returns data suitable for the initialisation of a generic input data type plus a dictionary containing meta information. If None, the current default for this is hooks.prepare_points_from_parts, which prepares point coordinates for any input data type accepting a 2D NumPy array. If no processing of the raw input data is desired, use hooks.prepare_pass. The default preparation hook can be set via the class attribute _default_preparation_hook (must be a staticmethod).
- registered_recipe_key: A string identifying a pre-defined clustering building block recipe. If this is None, the current default is "coordinates", which can be overridden by the class attribute _default_recipe_key. The recipe key is passed to hooks.get_registered_recipe to retrieve the actual recipe. The key "none" provides an empty recipe.
- clustering_type: The type of clustering to create. The current default (and the only option available out of the box) is Clustering and can be overridden via the class attribute _default_clustering. This allows the use of the builder for the creation of other clusterings, e.g. for subclasses of Clustering.
- alias/parent: Directly passed to the created clustering.
- **recipe: Other keyword arguments are interpreted as modifications to the default recipe (retrieved by registered_recipe_key).
To start with the examination of these options, we should look into what is actually meant by a clustering recipe. A recipe is basically a nested mapping of clustering component strings (matching the corresponding keyword arguments used on clustering/component initialisation, e.g. "input_data"
or "neighbours"
) to the generic types (classes, not instances) that should be used in the corresponding place. A recipe could for example look like this:
[17]:
recipe = {
    "input_data": _types.InputDataExtComponentsMemoryview,
    "fitter": "bfs",
    "fitter.getter": "brute_force",
    "fitter.getter.dgetter": "metric",
    "fitter.getter.dgetter.metric": "euclidean",
    "fitter.na": ("vector", (1000,), {}),
    "fitter.checker": "contains",
    "fitter.queue": "fifo"
}
In this recipe, the generic type supposed to wrap the input data is specified explicitly as the class object. Alternatively, strings can be used to specify a type in shorthand notation. Which abbreviations are understood is defined in hooks.COMPONENT_NAME_TYPE_MAP
. In the fitter case, "bfs"
is translated into _fit.FitterExtBFS
. Dot notation is used to indicate nested dependencies, e.g. to define components needed to create other components. Similarly, shorthand notation is supported for the component key, as shown with "fitter.getter"
, which stands in for the neighbours getter required by the fitter. Abbreviations on the key side are defined in hooks.COMPONENT_ALT_KW_MAP
. For the "fitter.na"
component (one of the neighbours containers the fitter needs), we have a tuple as the value in the mapping. This is interpreted as a component string identifier, followed by an arguments tuple and a keyword arguments dictionary used in the initialisation of the corresponding component. Note also that the recipe defines only "fitter.na"
(neighbours
) and not "fitter.nb"
(neighbour_neighbours
), in which case the same type will be used for both components. Those fallback relationships are defined in hooks.COMPONENT_KW_TYPE_ALIAS_MAP
.
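The tuple convention for component values can be pictured with a small, self-contained helper. This is a hypothetical sketch, not part of cnnclustering; the function normalize_component is invented here for illustration, only the (identifier, args, kwargs) convention comes from the text above.

```python
def normalize_component(value):
    """Normalize a recipe value into (identifier, args, kwargs).

    A plain string (or class) stands alone and gets empty arguments;
    a tuple is read as (identifier, args, kwargs), mirroring the
    ("vector", (1000,), {}) entry for "fitter.na" above.
    """
    if isinstance(value, tuple):
        identifier, args, kwargs = value
        return identifier, args, kwargs
    return value, (), {}


print(normalize_component("bfs"))                      # ('bfs', (), {})
print(normalize_component(("vector", (1000,), {})))    # ('vector', (1000,), {})
```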
This recipe can now be passed to a builder. Calling the build
method of a builder will create and return a Clustering
:
[18]:
print(
    cluster.ClusteringBuilder(
        points,
        preparation_hook=hooks.prepare_pass,
        registered_recipe_key="none",
        **recipe
    ).build()
)
Clustering(input_data=InputDataExtComponentsMemoryview, fitter=FitterExtBFS(ngetter=NeighboursGetterExtBruteForce(dgetter=DistanceGetterExtMetric(metric=MetricExtEuclidean), sorted=False, selfcounting=True), na=NeighboursExtVector, nb=NeighboursExtVector, checker=SimilarityCheckerExtContains, queue=QueueExtFIFOQueue), predictor=None)
For the initial example of using point coordinates in a sequence of sequences, the builder part is equivalent to:
[19]:
# The recipe registered as "coordinates":
# {
#     "input_data": "components_mview",
#     "fitter": "bfs",
#     "fitter.ngetter": "brute_force",
#     "fitter.na": "vuset",
#     "fitter.checker": "switch",
#     "fitter.queue": "fifo",
#     "fitter.ngetter.dgetter": "metric",
#     "fitter.ngetter.dgetter.metric": "euclidean_r",
# }
print(
    cluster.ClusteringBuilder(
        points,
        registered_recipe_key="coordinates",
    ).build()
)
Clustering(input_data=InputDataExtComponentsMemoryview, fitter=FitterExtBFS(ngetter=NeighboursGetterExtBruteForce(dgetter=DistanceGetterExtMetric(metric=MetricExtEuclideanReduced), sorted=False, selfcounting=True), na=NeighboursExtVectorCPPUnorderedSet, nb=NeighboursExtVectorCPPUnorderedSet, checker=SimilarityCheckerExtSwitchContains, queue=QueueExtFIFOQueue), predictor=None)
It is possible to modify a given recipe with the explicit use of keyword arguments. Note that in this case dots are replaced by double underscores:
[20]:
print(
    cluster.ClusteringBuilder(
        points,
        registered_recipe_key="coordinates",
        fitter__ngetter__dgetter__metric="precomputed"
    ).build()
)
Clustering(input_data=InputDataExtComponentsMemoryview, fitter=FitterExtBFS(ngetter=NeighboursGetterExtBruteForce(dgetter=DistanceGetterExtMetric(metric=MetricExtPrecomputed), sorted=False, selfcounting=True), na=NeighboursExtVectorCPPUnorderedSet, nb=NeighboursExtVectorCPPUnorderedSet, checker=SimilarityCheckerExtSwitchContains, queue=QueueExtFIFOQueue), predictor=None)
The above modification makes the recipe match the "distances"
recipe. Other readily available recipes are "neighbourhoods"
and "sorted_neighbourhoods"
. Users are encouraged to modify those to their liking or to define their own custom recipes.
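The dunder-to-dot translation used for such keyword arguments can be sketched as a one-liner. This is an illustrative snippet, not the library's actual implementation; the name dunder_to_dotted is invented here.

```python
def dunder_to_dotted(kwargs):
    """Turn keyword-argument style keys (double underscores) back into
    the dotted recipe notation used by the component mapping."""
    return {key.replace("__", "."): value for key, value in kwargs.items()}


mods = dunder_to_dotted({"fitter__ngetter__dgetter__metric": "precomputed"})
print(mods)  # {'fitter.ngetter.dgetter.metric': 'precomputed'}
```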
Newly defined types that should be usable in a builder-controlled aggregation need to implement a classmethod get_builder_kwargs() -> list
that provides a list of component identifiers necessary to initialise an instance of the type.
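A custom component type could therefore look roughly like this. This is a hedged sketch: the class MyNeighboursGetter and its "distance_getter" dependency are made up for illustration, and only the get_builder_kwargs classmethod contract comes from the text above.

```python
class MyNeighboursGetter:
    """Hypothetical custom neighbours getter usable in a builder recipe."""

    def __init__(self, distance_getter):
        self.distance_getter = distance_getter

    @classmethod
    def get_builder_kwargs(cls) -> list:
        # Component identifiers the builder has to resolve (from the
        # recipe) before it can initialise an instance of this type
        return ["distance_getter"]


print(MyNeighboursGetter.get_builder_kwargs())  # ['distance_getter']
```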