
Commit f4851e5

DOC: Remove ..versionadded directives before 2.0 (#63035)
Co-authored-by: Richard Shadrach <45562402+rhshadrach@users.noreply.github.com>
1 parent a2a5f87 commit f4851e5


43 files changed: +19 -522 lines changed

doc/source/user_guide/duplicates.rst

Lines changed: 0 additions & 2 deletions
@@ -109,8 +109,6 @@ with the same label.
 Disallowing Duplicate Labels
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-.. versionadded:: 1.2.0
-
 As noted above, handling duplicates is an important feature when reading in raw
 data. That said, you may want to avoid introducing duplicates as part of a data
 processing pipeline (from methods like :meth:`pandas.concat`,
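
A minimal sketch of the ``allows_duplicate_labels`` flag this section documents; the labels and values are illustrative:

    import pandas as pd

    s = pd.Series([0, 1, 2], index=["a", "b", "c"]).set_flags(allows_duplicate_labels=False)
    s.flags.allows_duplicate_labels  # False

    # An operation that would introduce duplicate labels now raises
    # pandas.errors.DuplicateLabelError, e.g.:
    # pd.concat([s, s])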

doc/source/user_guide/groupby.rst

Lines changed: 0 additions & 2 deletions
@@ -1264,8 +1264,6 @@ with
 Numba accelerated routines
 --------------------------

-.. versionadded:: 1.1
-
 If `Numba <https://numba.pydata.org/>`__ is installed as an optional dependency, the ``transform`` and
 ``aggregate`` methods support ``engine='numba'`` and ``engine_kwargs`` arguments.
 See :ref:`enhancing performance with Numba <enhancingperf.numba>` for general usage of the arguments
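
A minimal sketch of the Numba-accelerated path described above; it assumes the optional ``numba`` dependency is installed, and the frame and function are illustrative:

    import pandas as pd

    def plus_one(values, index):
        # UDFs used with engine="numba" take (values, index) and are JIT-compiled
        return values + 1

    df = pd.DataFrame({"key": ["a", "a", "b"], "value": [1.0, 2.0, 3.0]})
    df.groupby("key")["value"].transform(plus_one, engine="numba")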

doc/source/user_guide/io.rst

Lines changed: 5 additions & 33 deletions
@@ -156,13 +156,9 @@ dtype : Type name or dict of column -> type, default ``None``
 Data type for data or columns. E.g. ``{'a': np.float64, 'b': np.int32, 'c': 'Int64'}``
 Use ``str`` or ``object`` together with suitable ``na_values`` settings to preserve
 and not interpret dtype. If converters are specified, they will be applied INSTEAD
-of dtype conversion.
-
-.. versionadded:: 1.5.0
-
-Support for defaultdict was added. Specify a defaultdict as input where
-the default determines the dtype of the columns which are not explicitly
-listed.
+of dtype conversion. Specify a defaultdict as input where
+the default determines the dtype of the columns which are not explicitly
+listed.

 dtype_backend : {"numpy_nullable", "pyarrow"}, defaults to NumPy backed DataFrames
 Which dtype_backend to use, e.g. whether a DataFrame should have NumPy
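
A small sketch of the ``defaultdict`` behaviour folded into the ``dtype`` description above; the column names are illustrative:

    from collections import defaultdict
    from io import StringIO

    import pandas as pd

    data = "a,b,c\n1,2,3"
    # "a" is read as int64; columns not listed fall back to the default float64
    dtypes = defaultdict(lambda: "float64", a="int64")
    pd.read_csv(StringIO(data), dtype=dtypes).dtypes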
@@ -177,12 +173,8 @@ dtype_backend : {"numpy_nullable", "pyarrow"}, defaults to NumPy backed DataFram
 engine : {``'c'``, ``'python'``, ``'pyarrow'``}
 Parser engine to use. The C and pyarrow engines are faster, while the python engine
 is currently more feature-complete. Multithreading is currently only supported by
-the pyarrow engine.
-
-.. versionadded:: 1.4.0
-
-The "pyarrow" engine was added as an *experimental* engine, and some features
-are unsupported, or may not work correctly, with this engine.
+the pyarrow engine. Some features of the "pyarrow" engine
+are unsupported or may not work correctly.
 converters : dict, default ``None``
 Dict of functions for converting values in certain columns. Keys can either be
 integers or column labels.
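
A minimal sketch of selecting the pyarrow parser described above; it assumes the optional ``pyarrow`` dependency is installed:

    from io import StringIO

    import pandas as pd

    data = "a,b,c\n1,2,3\n4,5,6"
    pd.read_csv(StringIO(data), engine="pyarrow")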
@@ -355,8 +347,6 @@ on_bad_lines : {{'error', 'warn', 'skip'}}, default 'error'
 - 'warn', print a warning when a bad line is encountered and skip that line.
 - 'skip', skip bad lines without raising or warning when they are encountered.

-.. versionadded:: 1.3.0
-
 .. _io.dtypes:

 Specifying column data types
@@ -935,8 +925,6 @@ DD/MM/YYYY instead. For convenience, a ``dayfirst`` keyword is provided:
 Writing CSVs to binary file objects
 +++++++++++++++++++++++++++++++++++

-.. versionadded:: 1.2.0
-
 ``df.to_csv(..., mode="wb")`` allows writing a CSV to a file object
 opened binary mode. In most cases, it is not necessary to specify
 ``mode`` as pandas will auto-detect whether the file object is
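
A short sketch of the binary-mode behaviour described above; the ``BytesIO`` buffer stands in for any file object opened in binary mode:

    import io

    import pandas as pd

    df = pd.DataFrame({"a": [0, 1, 2]})
    buf = io.BytesIO()
    # pandas detects the binary buffer; mode="wb" can also be passed explicitly
    df.to_csv(buf)
    buf.getvalue()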
@@ -1122,8 +1110,6 @@ You can elect to skip bad lines:
 data = "a,b,c\n1,2,3\n4,5,6,7\n8,9,10"
 pd.read_csv(StringIO(data), on_bad_lines="skip")

-.. versionadded:: 1.4.0
-
 Or pass a callable function to handle the bad line if ``engine="python"``.
 The bad line will be a list of strings that was split by the ``sep``:
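
A sketch of the callable form of ``on_bad_lines`` referenced above (only valid with ``engine="python"``); the truncation rule is illustrative:

    from io import StringIO

    import pandas as pd

    data = "a,b,c\n1,2,3\n4,5,6,7\n8,9,10"
    # The callable receives the offending row already split on ``sep``; returning
    # a list of the expected length keeps the row, returning None drops it.
    pd.read_csv(StringIO(data), engine="python", on_bad_lines=lambda bad: bad[:3])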

@@ -1547,8 +1533,6 @@ functions - the following example shows reading a CSV file:

 df = pd.read_csv("https://download.bls.gov/pub/time.series/cu/cu.item", sep="\t")

-.. versionadded:: 1.3.0
-
 A custom header can be sent alongside HTTP(s) requests by passing a dictionary
 of header key value mappings to the ``storage_options`` keyword argument as shown below:
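
A sketch of the custom-header pattern described above; the header value is illustrative and the URL is the one already used in this section:

    import pandas as pd

    headers = {"User-Agent": "pandas"}
    df = pd.read_csv(
        "https://download.bls.gov/pub/time.series/cu/cu.item",
        sep="\t",
        storage_options=headers,
    )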

@@ -1600,8 +1584,6 @@ More sample configurations and documentation can be found at `S3Fs documentation
 If you do *not* have S3 credentials, you can still access public
 data by specifying an anonymous connection, such as

-.. versionadded:: 1.2.0
-
 .. code-block:: python

 pd.read_csv(
@@ -2535,8 +2517,6 @@ Links can be extracted from cells along with the text using ``extract_links="all
 df[("GitHub", None)]
 df[("GitHub", None)].str[1]

-.. versionadded:: 1.5.0
-
 .. _io.html:

 Writing to HTML files
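
A sketch of ``extract_links="all"`` as referenced in the hunk above; it assumes an HTML parser (lxml, or bs4 with html5lib) is installed and uses an illustrative table:

    from io import StringIO

    import pandas as pd

    html = (
        "<table><tr><th>GitHub</th></tr>"
        '<tr><td><a href="https://github.com/pandas-dev/pandas">pandas</a></td></tr></table>'
    )
    (df,) = pd.read_html(StringIO(html), extract_links="all")
    df[("GitHub", None)]         # (text, link) tuples
    df[("GitHub", None)].str[1]  # just the links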
@@ -2726,8 +2706,6 @@ parse HTML tables in the top-level pandas io function ``read_html``.
 LaTeX
 -----

-.. versionadded:: 1.3.0
-
 Currently there are no methods to read from LaTeX, only output methods.

 Writing to LaTeX files
@@ -2766,8 +2744,6 @@ XML
 Reading XML
 '''''''''''

-.. versionadded:: 1.3.0
-
 The top-level :func:`~pandas.io.xml.read_xml` function can accept an XML
 string/file/URL and will parse nodes and attributes into a pandas ``DataFrame``.
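
A minimal sketch of ``read_xml`` on an inline document, per the text above; it assumes ``lxml`` is installed (otherwise ``parser="etree"`` can be used):

    from io import StringIO

    import pandas as pd

    xml = """<?xml version="1.0"?>
    <data>
      <row><shape>square</shape><sides>4</sides></row>
      <row><shape>circle</shape><sides/></row>
    </data>"""
    pd.read_xml(StringIO(xml))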

@@ -3093,8 +3069,6 @@ supports parsing such sizeable files using `lxml's iterparse`_ and `etree's iter
 which are memory-efficient methods to iterate through an XML tree and extract specific elements and attributes.
 without holding entire tree in memory.

-.. versionadded:: 1.5.0
-
 .. _`lxml's iterparse`: https://lxml.de/3.2/parsing.html#iterparse-and-iterwalk
 .. _`etree's iterparse`: https://docs.python.org/3/library/xml.etree.elementtree.html#xml.etree.ElementTree.iterparse
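
A sketch of the ``iterparse`` option referenced above for very large files; the file name and the element/field names are hypothetical:

    import pandas as pd

    # Stream the file, keeping only the listed children/attributes of each
    # repeating <row> element instead of holding the whole tree in memory.
    df = pd.read_xml("very_large_file.xml", iterparse={"row": ["shape", "sides"]})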

@@ -3133,8 +3107,6 @@ of reading in Wikipedia's very large (12 GB+) latest article data dump.
 Writing XML
 '''''''''''

-.. versionadded:: 1.3.0
-
 ``DataFrame`` objects have an instance method ``to_xml`` which renders the
 contents of the ``DataFrame`` as an XML document.
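
A short sketch of ``DataFrame.to_xml`` as described above; ``lxml`` is used by default, with ``parser="etree"`` as the standard-library fallback:

    import pandas as pd

    df = pd.DataFrame({"shape": ["square", "circle"], "sides": [4.0, None]})
    print(df.to_xml(index=False))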

doc/source/user_guide/reshaping.rst

Lines changed: 0 additions & 2 deletions
@@ -478,8 +478,6 @@ The values can be cast to a different type using the ``dtype`` argument.

 pd.get_dummies(df, dtype=np.float32).dtypes

-.. versionadded:: 1.5.0
-
 :func:`~pandas.from_dummies` converts the output of :func:`~pandas.get_dummies` back into
 a :class:`Series` of categorical values from indicator values.
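
A minimal round-trip sketch of the ``from_dummies`` behaviour noted above; the column name is illustrative:

    import pandas as pd

    df = pd.DataFrame({"animal": ["cat", "dog", "cat"]})
    dummies = pd.get_dummies(df)        # columns: animal_cat, animal_dog
    pd.from_dummies(dummies, sep="_")   # recovers the original "animal" column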

doc/source/user_guide/timeseries.rst

Lines changed: 0 additions & 2 deletions
@@ -1964,8 +1964,6 @@ Note the use of ``'start'`` for ``origin`` on the last example. In that case, ``
 Backward resample
 ~~~~~~~~~~~~~~~~~

-.. versionadded:: 1.3.0
-
 Instead of adjusting the beginning of bins, sometimes we need to fix the end of the bins to make a backward resample with a given ``freq``. The backward resample sets ``closed`` to ``'right'`` by default since the last value should be considered as the edge point for the last bin.

 We can set ``origin`` to ``'end'``. The value for a specific ``Timestamp`` index stands for the resample result from the current ``Timestamp`` minus ``freq`` to the current ``Timestamp`` with a right close.
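
A small sketch of the ``origin='end'`` backward resample described above, on an illustrative 7-minute index:

    import pandas as pd

    idx = pd.date_range("2000-10-01 23:30:00", periods=10, freq="7min")
    ts = pd.Series(range(10), index=idx)
    # Bins are anchored on the last timestamp and closed on the right
    ts.resample("17min", origin="end").sum()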

doc/source/user_guide/visualization.rst

Lines changed: 0 additions & 2 deletions
@@ -645,8 +645,6 @@ each point:

 If a categorical column is passed to ``c``, then a discrete colorbar will be produced:

-.. versionadded:: 1.3.0
-
 .. ipython:: python

 @savefig scatter_plot_categorical.png
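
A sketch of the categorical-colorbar case mentioned above; it assumes matplotlib is installed and uses illustrative data:

    import pandas as pd

    df = pd.DataFrame(
        {
            "x": [1, 2, 3, 4],
            "y": [4, 3, 2, 1],
            "species": pd.Categorical(["setosa", "setosa", "virginica", "virginica"]),
        }
    )
    # Passing the categorical column to ``c`` produces a discrete colorbar
    df.plot.scatter(x="x", y="y", c="species", cmap="viridis", s=50)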

doc/source/user_guide/window.rst

Lines changed: 0 additions & 8 deletions
@@ -77,8 +77,6 @@ which will first group the data by the specified keys and then perform a windowi
 to compute the rolling sums to preserve accuracy as much as possible.


-.. versionadded:: 1.3.0
-
 Some windowing operations also support the ``method='table'`` option in the constructor which
 performs the windowing operation over an entire :class:`DataFrame` instead of a single column at a time.
 This can provide a useful performance benefit for a :class:`DataFrame` with many columns
@@ -100,8 +98,6 @@ be calculated with :meth:`~Rolling.apply` by specifying a separate column of wei
 df = pd.DataFrame([[1, 2, 0.6], [2, 3, 0.4], [3, 4, 0.2], [4, 5, 0.7]])
 df.rolling(2, method="table", min_periods=0).apply(weighted_mean, raw=True, engine="numba") # noqa: E501

-.. versionadded:: 1.3
-
 Some windowing operations also support an ``online`` method after constructing a windowing object
 which returns a new object that supports passing in new :class:`DataFrame` or :class:`Series` objects
 to continue the windowing calculation with the new values (i.e. online calculations).
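
A sketch of the ``online`` method mentioned above for exponentially weighted windows; it assumes the optional ``numba`` dependency is installed, and the split of the frame is illustrative:

    import pandas as pd

    df = pd.DataFrame({"a": range(5)})
    online_ewm = df.head(3).ewm(0.5).online()
    online_ewm.mean()                   # result over the first three rows
    online_ewm.mean(update=df.tail(2))  # continue the calculation with new rows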
@@ -182,8 +178,6 @@ By default the labels are set to the right edge of the window, but a

 This can also be applied to datetime-like indices.

-.. versionadded:: 1.3.0
-
 .. ipython:: python

 df = pd.DataFrame(
@@ -365,8 +359,6 @@ The ``engine_kwargs`` argument is a dictionary of keyword arguments that will be
 These keyword arguments will be applied to *both* the passed function (if a standard Python function)
 and the apply for loop over each window.

-.. versionadded:: 1.3.0
-
 ``mean``, ``median``, ``max``, ``min``, and ``sum`` also support the ``engine`` and ``engine_kwargs`` arguments.

 .. _window.cov_corr:
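
A sketch of passing ``engine``/``engine_kwargs`` to a built-in window aggregation, per the text above; it assumes the optional ``numba`` dependency is installed:

    import pandas as pd

    df = pd.DataFrame({"a": range(10)})
    df.rolling(3).mean(engine="numba", engine_kwargs={"parallel": False})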

pandas/_libs/parsers.pyx

Lines changed: 0 additions & 4 deletions
@@ -329,10 +329,6 @@ cdef class TextReader:

 # source: StringIO or file object

-..versionchange:: 1.2.0
-removed 'compression', 'memory_map', and 'encoding' argument.
-These arguments are outsourced to CParserWrapper.
-'source' has to be a file handle.
 """

 cdef:

pandas/_testing/asserters.py

Lines changed: 0 additions & 4 deletions
@@ -931,14 +931,10 @@ def assert_series_equal(
 assertion message.
 check_index : bool, default True
 Whether to check index equivalence. If False, then compare only values.
-
-.. versionadded:: 1.3.0
 check_like : bool, default False
 If True, ignore the order of the index. Must be False if check_index is False.
 Note: same labels must be with the same data.

-.. versionadded:: 1.5.0
-
 See Also
 --------
 testing.assert_index_equal : Check that two Indexes are equal.
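
A sketch of the two parameters documented above, using illustrative data:

    import pandas as pd

    left = pd.Series([1, 2, 3], index=["a", "b", "c"])
    right = pd.Series([3, 1, 2], index=["c", "a", "b"])

    # Same labels and data in a different order: passes with check_like=True
    pd.testing.assert_series_equal(left, right, check_like=True)

    # Compare only the values and ignore the index entirely
    pd.testing.assert_series_equal(
        pd.Series([1, 2, 3], index=["x", "y", "z"]),
        pd.Series([1, 2, 3], index=["a", "b", "c"]),
        check_index=False,
    )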

pandas/core/algorithms.py

Lines changed: 0 additions & 2 deletions
@@ -697,8 +697,6 @@ def factorize(
 If True, the sentinel -1 will be used for NaN values. If False,
 NaN values will be encoded as non-negative integers and will not drop the
 NaN from the uniques of the values.
-
-.. versionadded:: 1.5.0
 {size_hint}\

 Returns
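
A sketch of the behaviour documented above; the parameter name is assumed to be ``use_na_sentinel``, which is what this description belongs to in current pandas:

    import numpy as np
    import pandas as pd

    values = np.array(["a", None, "b"], dtype=object)
    pd.factorize(values)                         # NaN is coded with the sentinel -1
    pd.factorize(values, use_na_sentinel=False)  # NaN gets its own code and appears in uniques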
