DistanceMetric class. © 2007–2017, scikit-learn developers (BSD License). The distance metric can be, for example, Euclidean, Manhattan, Chebyshev, or Hamming distance. You can even use a custom distance metric; an answer on Stack Overflow covers this.

n_neighbors : int, default=5
Number of neighbors required for each sample. When querying the training set itself, the query point is not considered its own neighbor.

algorithm : {'auto', 'ball_tree', 'kd_tree', 'brute'}, default='auto'
Refer to the documentation of BallTree and KDTree for a description of the available algorithms.

metric : str or callable, default='minkowski'
The distance metric to use for the tree. If metric='precomputed', X is assumed to be a distance matrix of shape (n_samples, n_samples) and must be square during fit; query arrays then have shape (n_queries, n_indexed) rather than (n_queries, n_features). Some metrics, such as Manhattan, are often preferred over Euclidean distance in high-dimensional settings.

p : int, default=2
Power parameter for the Minkowski metric from sklearn.metrics.pairwise.pairwise_distances.

mode : {'connectivity', 'distance'}, default='connectivity'
Determines the type of the returned sparse matrix of shape (n_queries, n_samples_fit).

New in version 0.9. See Nearest Neighbors in the online documentation. If return_distance=False, setting sort_results=True will result in an error, since there are no distances by which the results could be sorted.
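To make the metric choice concrete, here is a minimal sketch (the toy data and parameter values are invented for illustration) of overriding the default Minkowski metric with p=2, i.e. Euclidean distance, on a KNN classifier:

```python
from sklearn.neighbors import KNeighborsClassifier

# Toy 2-D data with two well-separated classes (illustrative only).
X = [[0, 0], [1, 0], [0, 1], [5, 5], [6, 5], [5, 6]]
y = [0, 0, 0, 1, 1, 1]

# metric='manhattan' replaces the default Euclidean behavior.
clf = KNeighborsClassifier(n_neighbors=3, metric='manhattan')
clf.fit(X, y)
pred = clf.predict([[0.5, 0.5], [5.5, 5.5]])
print(pred)  # -> [0 1]
```

Any string metric identifier accepted by DistanceMetric can be passed the same way.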
k-nearest neighbors takes a query point, finds the K nearest points from the population matrix, and predicts a label for that point, K being user defined (e.g. 1, 2, 6). The DistanceMetric class provides a uniform interface to fast distance metric functions. In scikit-learn, k-NN regression uses Euclidean distance by default, although a few more distance metrics are available, such as Manhattan and Chebyshev; for arbitrary p, minkowski_distance (l_p) is used. Estimators accept parameters of the form <component>__<parameter>, so that it is possible to update each component of a nested object.

Y : array of shape (Ny, D), representing Ny points in D dimensions. If not specified, then Y=X.

n_jobs : int, default=None
None means 1 unless in a joblib.parallel_backend context; -1 means using all processors.

In a neighbors graph, A[i, j] is assigned the weight of the edge that connects i to j (indexes start at 0).

class sklearn.neighbors.KNeighborsRegressor(n_neighbors=5, weights='uniform', algorithm='auto', leaf_size=30, p=2, metric='minkowski', metric_params=None, n_jobs=1, **kwargs)
Regression based on k-nearest neighbors; the metric parameter sets the distance metric to use for the tree.

A related question: given a sparse matrix (created using scipy.sparse.csr_matrix) of size N x N (N = 900,000), find, for every row in a test set, the top k nearest neighbors (sparse row vectors from the input matrix) using a custom distance metric. Each row of the input matrix represents an item, and for each item in the test set we need its knn.
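A minimal usage sketch of KNeighborsRegressor (toy data invented for illustration); with uniform weights, the prediction is the mean of the targets of the k nearest neighbors:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# 1-D toy regression problem (illustrative values only).
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])

# With weights='uniform' (the default), the predicted value is the
# unweighted mean of the targets of the 2 nearest neighbors.
reg = KNeighborsRegressor(n_neighbors=2)
reg.fit(X, y)
pred = reg.predict([[1.5]])
print(pred)  # -> [0.5], the mean of the targets at x=1.0 and x=2.0
```

Passing weights='distance' instead would weight each neighbor's target by the inverse of its distance.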
In addition, the metric keyword can take a user-defined function, which reads two arrays, X1 and X2, containing the coordinates of the two points whose distance we want to calculate. You can also query for multiple points: in general, multiple query points can be passed at the same time. Because the number of neighbors of each point is not necessarily equal, the results of radius-based queries for multiple query points cannot fit in a standard data array. With a sparse precomputed graph, only "nonzero" elements may be considered neighbors.

kneighbors returns the indices of, and distances to, each point's neighbors in the dataset. When p = 1, the Minkowski metric is equivalent to manhattan_distance. For classification, the algorithm uses the most frequent class among the neighbors.

# kNN hyper-parameters
sklearn.neighbors.KNeighborsClassifier(n_neighbors, weights, metric, p)

Euclidean distance is a measure of the true straight-line distance between two points in Euclidean space. If p=1, the distance metric is manhattan_distance; if p=2, it is euclidean_distance; we can experiment with higher values of p if we want to. If X is not provided to a neighbors query, neighbors of each indexed point are returned, and the query point is not considered its own neighbor. You can use any distance method from the list by passing the metric parameter to the KNN object; in DistanceMetric examples, the default is 'euclidean'.

Note: fitting on sparse input will override the setting of the algorithm parameter, using brute force.

ind : ndarray of shape X.shape[:-1], dtype=object
Each element is a 1D array of indices or distances. For many metrics, the utilities in scipy.spatial.distance.cdist and scipy.spatial.distance.pdist will be faster.
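Sketch of the user-defined metric described above: a callable that receives the coordinate arrays of two points and returns their distance (here it simply re-implements Euclidean distance; the data is invented for illustration):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def my_metric(x1, x2):
    # x1, x2 are 1-D coordinate arrays of the two points.
    return np.sqrt(np.sum((x1 - x2) ** 2))

X = np.array([[0.0, 0.0], [0.0, 1.0], [3.0, 4.0]])
nn = NearestNeighbors(n_neighbors=2, metric=my_metric)
nn.fit(X)

# Query several points at once: one row per query point.
dist, ind = nn.kneighbors([[0.0, 0.0], [3.0, 3.0]])
print(ind)   # nearest-neighbor indices, one row per query
print(dist)  # the matching distances
```

A callable metric is far slower than the built-in string identifiers, since every distance evaluation crosses back into Python.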
The DistanceMetric class groups its metrics into metrics intended for real-valued vector spaces and metrics intended for two-dimensional vector spaces. Note that the haversine metric requires data in [latitude, longitude] form, with inputs and outputs in radians. get_params returns the parameters for this estimator and contained subobjects that are estimators.

The knn classifier sklearn model is used with supervised learning in scikit-learn. For radius-based queries, the result points are not necessarily sorted by distance to their query point. kneighbors returns indices of and distances to the neighbors of each point. Note that in order to be used within the BallTree, the distance must be a true metric; additional arguments will be passed to the requested metric via metric_params. pairwise computes the pairwise distances between X and Y, where X is an array of shape (Nx, D), representing Nx points in D dimensions.

In the following example, we construct a DistanceMetric from the string identifier; the distance values are computed according to the metric passed to get_metric:

>>> dist = DistanceMetric.get_metric('euclidean')
>>> X = [[0, 1, 2], [3, 4, 5]]
>>> dist.pairwise(X)
array([[0.        , 5.19615242],
       [5.19615242, 0.        ]])

Note that unlike the results of a k-neighbors query, the neighbors returned by radius_neighbors are not sorted by distance by default. rdist_to_dist converts the reduced distance to the true distance; the reduced distance is a more efficient measure which preserves the rank of the true distance. In another example, we construct a NearestNeighbors class from an array representing our data set and ask who's the closest point to a query. The default metric is minkowski, and with p=2 it is equivalent to the standard Euclidean metric. For many metrics, scipy.spatial.distance.pdist will be faster.
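A short sketch (toy data assumed) of the sorting behavior noted above: radius_neighbors results are unsorted unless sort_results=True is passed along with return_distance=True:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

X = np.array([[0.0], [3.0], [1.0], [0.5]])
nn = NearestNeighbors(radius=1.6).fit(X)

# sort_results=True orders each row by increasing distance; it requires
# return_distance=True, otherwise an error is raised.
dist, ind = nn.radius_neighbors([[0.0]], return_distance=True,
                                sort_results=True)
print(ind[0])   # indices of all points within radius 1.6, nearest first
print(dist[0])  # the matching distances, in increasing order
```

Both returned objects are arrays of arrays, because different query points can have different numbers of in-radius neighbors.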
mode : {'connectivity', 'distance'}, default='connectivity'
Type of returned matrix: 'connectivity' will return the connectivity matrix with ones and zeros, and 'distance' will return the distances between neighbors according to the given metric.

For arbitrary p, minkowski_distance (l_p) is used; when p = 1, this is equivalent to using manhattan_distance (l1), and euclidean_distance (l2) for p = 2. Note that the normalization of the density output (in kernel density estimation) is correct only for the Euclidean distance metric.

Nearest Centroid Classifier: the NearestCentroid classifier is a simple algorithm that represents each class by a centroid. Metrics intended for boolean-valued vector spaces treat any nonzero entry as true; metrics intended for integer-valued vector spaces, though intended for integer inputs, are also valid for real-valued vectors.

weights : {'uniform', 'distance'} or callable, default='uniform'
Weight function used in prediction. With 'uniform', all points in each neighborhood are weighted equally.

As you can see, one example returns [[0.5]] and [[2]], which means that the nearest element is at distance 0.5 and is the third element of samples (indexes start at 0). See Nearest Neighbors in the online documentation for a discussion of the choice of algorithm and leaf_size.

Warning: regarding the nearest-neighbors algorithms, if it is found that two neighbors, neighbor k+1 and k, have identical distances but different labels, the results will depend on the ordering of the training data.

To be used within the BallTree, the distance must be a true metric: see the Glossary. Query arrays have shape (n_queries, n_features). See help(type(self)) for an accurate signature. The DistanceMetric class gives a list of available metrics, accessed via the get_metric class method and the metric string identifier (see below).
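The two mode values can be compared directly (a sketch with toy data, not the documentation's own example):

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

X = np.array([[0.0], [1.0], [3.0]])

# 'connectivity' stores a one for each neighbor edge; 'distance' stores
# the distance between neighbors according to the given metric.
A_conn = kneighbors_graph(X, n_neighbors=1, mode='connectivity')
A_dist = kneighbors_graph(X, n_neighbors=1, mode='distance')
print(A_conn.toarray())
print(A_dist.toarray())
```

Both results are returned as sparse CSR matrices; toarray() is only for display.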
get_params(deep=True) : if True, will return the parameters for this estimator and contained subobjects that are estimators. Because of the Python object overhead involved in calling a Python function as a metric, a callable metric will be fairly slow, but it will have the same scaling as other distances.

radius_neighbors finds the neighbors within a given radius of a point or points and returns the indices of the nearest points in the population matrix. For efficiency it returns arrays of objects, where each element is a numpy integer array listing the indices of neighbors of the corresponding point.

You can use the 'wminkowski' metric and pass the weights to the metric using metric_params:

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    np.random.seed(9)
    X = np.random.rand(100, 5)
    weights = np.random.choice(5, 5, replace=False)
    nbrs = NearestNeighbors(algorithm='brute', metric='wminkowski',
                            metric_params={'w': weights}, p=1)

If sort_results=True, the distances and indices will be sorted by increasing distance; if return_distance=False, setting sort_results=True will result in an error. Some parameters are not used and are present only for API consistency by convention. The optimal value of leaf_size depends on the nature of the problem.

leaf_size : int, default=30
Leaf size passed to BallTree or KDTree. This can affect the speed of the construction and query, as well as the memory required to store the tree.

See the documentation of the DistanceMetric class for a list of available metrics; the listing below gives the string metric identifiers and the associated classes. The default metric is minkowski, and with p=2 it is equivalent to the standard Euclidean metric. Using a different distance metric can have a different outcome on the performance of your model. n_samples_fit is the number of samples in the fitted data.

Tangent distance tries to identify invariances of classes (e.g. the shape of '3' regardless of rotation, thickness, etc.). Also read the linked answer if you want to use your own method for distance calculation.
class sklearn.neighbors.NearestNeighbors(n_neighbors=5, radius=1.0, algorithm='auto', leaf_size=30, metric='minkowski', p=2, metric_params=None, n_jobs=1, **kwargs)
Unsupervised learner for implementing neighbor searches.

p : int, default=2
Parameter for the Minkowski metric from sklearn.metrics.pairwise.pairwise_distances.

Another way to reduce memory and computation time is to remove (near-)duplicate points and use sample_weight instead. Neighborhoods are restricted to the points at a distance lower than radius. The reduced distance, defined for some metrics, is a computationally more efficient measure which preserves the rank of the true distance.

radius : float, default=1.0
Range of parameter space to use by default for radius_neighbors queries; the limiting distance of neighbors to return.

If metric is 'precomputed', X is assumed to be a distance matrix. In the listings below, standard abbreviations are used. dist_to_rdist converts the true distance to the reduced distance.

metric_params : dict, default=None
Additional keyword arguments for the metric function.

Tangent distance is not a new concept but is widely cited; it is also relatively standard (The Elements of Statistical Learning covers it). Its main use is in pattern/image recognition, where it tries to identify invariances of classes. It would be nice to have 'tangent distance' as a possible metric in nearest-neighbors models.

As the name suggests, KNeighborsClassifier from sklearn.neighbors is used to implement the KNN vote. One option is to build a sparse distance graph with NearestNeighbors.radius_neighbors_graph with mode='distance', then use metric='precomputed' here. radius_neighbors_graph([X, radius, mode, ...]) computes the (weighted) graph of neighbors for points in X.
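A sketch of the precomputed route mentioned above (toy data assumed): fit on a square distance matrix, then query with rows of distances to the indexed points:

```python
import numpy as np
from sklearn.metrics import pairwise_distances
from sklearn.neighbors import NearestNeighbors

X = np.array([[0.0], [1.0], [5.0]])

# With metric='precomputed', fit expects an (n_samples, n_samples)
# distance matrix rather than raw features; it must be square.
D = pairwise_distances(X)  # Euclidean distances by default
nn = NearestNeighbors(n_neighbors=2, metric='precomputed').fit(D)

# Queries have shape (n_queries, n_indexed): distances from each
# query point to every indexed point.
D_query = pairwise_distances(np.array([[0.4]]), X)
dist, ind = nn.kneighbors(D_query)
print(ind)   # -> [[0 1]]
print(dist)  # -> [[0.4 0.6]]
```

This is useful when distances are expensive to compute and can be cached, or come from a metric scikit-learn does not implement.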
Note that not all metrics are valid with all algorithms. Because the number of neighbors of each point is not necessarily equal, radius-based results are returned as an array of arrays of indices of the nearest points. For several parameters, the default is the value passed to the constructor.

In the listings below, func denotes a function which takes two one-dimensional numpy arrays and returns a distance. pairwise returns the shape (Nx, Ny) array of pairwise distances between points in X and Y; it is a convenience routine for the sake of testing. For arbitrary p, minkowski_distance (l_p) is used. If sorting is requested, in each row of the result the non-zero entries will be sorted by increasing distances; if not, the results may not be sorted. For count queries, each entry gives the number of neighbors within a distance r of the corresponding point.

For example, to use the Euclidean distance:

>>> from sklearn.neighbors import DistanceMetric
>>> dist = DistanceMetric.get_metric('euclidean')
>>> X = [[0, 1, 2], [3, 4, 5]]
>>> dist.pairwise(X)
array([[0.        , 5.19615242],
       [5.19615242, 0.        ]])

The default distance is 'euclidean' (the 'minkowski' metric with the p parameter equal to 2). For metric='precomputed' the query shape should be (n_queries, n_indexed). The target is predicted by local interpolation of the targets associated with the nearest neighbors in the training set. In the Euclidean distance metric, the reduced distance is the squared Euclidean distance. Similarity is determined using a distance metric between two data points. The DistanceMetric class gives a list of available metrics, accessed via the get_metric class method and the metric string identifier (see below).
scikit-learn 0.24.0, other versions. The haversine distance metric requires data in the form of [latitude, longitude], and both inputs and outputs are in units of radians. Points lying on the boundary of a radius query are included in the results.

K-nearest neighbors (KNN) is a classification and regression algorithm which uses nearby points to generate predictions; it is a supervised machine learning model.

n_jobs : int, default=1
The number of parallel jobs to run for the neighbors search.

metric : string, default='minkowski'
The distance metric used to calculate the k-neighbors for each sample point. See the documentation of DistanceMetric for a list of available metrics; for arbitrary p, minkowski_distance (l_p) is used.

kneighbors([X, n_neighbors, return_distance]) returns neighbors of each point; kneighbors_graph computes the (weighted) graph of k-neighbors for points in X, whose edges are Euclidean distances between points by default. If X is not provided, neighbors of each indexed point are returned.

metric_params : dict
Parameters for the metric used to compute distances to neighbors.

algorithm : 'auto' will attempt to decide the most appropriate algorithm based on the values passed to the fit method. Note: fitting on sparse input will override this setting, using brute force; X may be a sparse graph.

In a radius query with radius 1.6, the first array returned contains the distances to all points which are closer than 1.6, while the second array returned contains their indices. sklearn.neighbors.RadiusNeighborsClassifier takes the same metric parameter for its tree. fit(X[, y]) fits the nearest neighbors estimator from the training dataset.
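Sketch of the radians requirement (the city coordinates are approximate and assumed for illustration): convert degrees to radians before fitting, and scale the returned radian distances by the Earth's radius to get kilometres:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Haversine expects [latitude, longitude] pairs *in radians*.
coords_deg = np.array([[48.8566, 2.3522],    # Paris (approx.)
                       [51.5074, -0.1278]])  # London (approx.)
coords_rad = np.radians(coords_deg)

nn = NearestNeighbors(metric='haversine', algorithm='ball_tree')
nn.fit(coords_rad)
dist, ind = nn.kneighbors(coords_rad[:1], n_neighbors=2)

# Outputs are also in radians; multiply by the Earth's radius for km.
earth_radius_km = 6371.0
print(dist * earth_radius_km)  # second entry roughly 344 km (Paris-London)
```

Feeding degrees instead of radians silently produces meaningless distances, so the conversion step matters.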
In the boolean metric listings, the following abbreviations are used:

    NTT : number of dims in which both values are True
    NTF : number of dims in which the first value is True, second is False
    NFT : number of dims in which the first value is False, second is True
    NFF : number of dims in which both values are False
    NNEQ : number of non-equal dimensions, NNEQ = NTF + NFT
    NNZ : number of nonzero dimensions, NNZ = NTF + NFT + NTT

A true metric must satisfy, among other properties:

    Identity: d(x, y) = 0 if and only if x == y
    Triangle inequality: d(x, y) + d(y, z) >= d(x, z)

metric_params : dict, default=None
Additional keyword arguments for the metric function.

See Nearest Neighbors in the online documentation for a discussion of the choice of algorithm and leaf_size, and the docstring of DistanceMetric for a list of available metrics.

weights : {'uniform', 'distance'} or callable, default='uniform'
Weight function used in prediction. When p = 1, the Minkowski metric is equivalent to using manhattan_distance (l1), and euclidean_distance (l2) for p = 2.

The K-nearest-neighbor supervisor will take a set of input objects and output values. sklearn.neighbors.kneighbors_graph with mode='distance' will return the distances between neighbors according to the given metric. leaf_size can affect the speed of the construction and query, as well as the memory required; the default is the value passed to the constructor.
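In the notation above, the hamming distance on boolean vectors is NNEQ / N, the fraction of differing dimensions. A quick sketch (the vectors are invented for illustration):

```python
import numpy as np
from sklearn.metrics import pairwise_distances

x = np.array([[True, False, True, True]])
y = np.array([[True, True, False, True]])

# Two of the four dimensions differ (NTF + NFT = 2), so the hamming
# distance is 2 / 4 = 0.5.
d = pairwise_distances(x, y, metric='hamming')
print(d)  # -> [[0.5]]
```

The other boolean metrics (jaccard, dice, etc.) are different ratios of the same NTT/NTF/NFT/NFF counts.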
In that example we ask who's the closest point to [1, 1, 1]: the first array returned contains the distances to the nearest points, and the second their indices. The method works on simple estimators as well as on nested objects (such as Pipeline). With 5 neighbors in the KNN model for this dataset, we obtain a relatively smooth decision boundary. Because of the Python object overhead involved in calling a Python function as a metric, such a metric will be fairly slow, but it will have the same scaling as other distances. For efficiency, radius_neighbors returns arrays of objects, where each object is a 1D array of indices or distances. Some parameters are only used with mode='distance'.

p : int, default=2

class sklearn.neighbors.DistanceMetric (scikit-learn v0.19.1)
The various metrics can be accessed via the get_metric class method and the metric string identifier (see below). Metrics intended for integer-valued vectors are also valid metrics in the case of real-valued vectors. Euclidean distance is the most widely used metric, and it is the default that the sklearn library uses for k-nearest neighbours.