Commit afe99dd

committed
Remove ..versionadded before 2.0
1 parent b80503f commit afe99dd

File tree

8 files changed: +4 -56 lines changed

doc/source/user_guide/duplicates.rst

Lines changed: 0 additions & 2 deletions
@@ -109,8 +109,6 @@ with the same label.
 Disallowing Duplicate Labels
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-.. versionadded:: 1.2.0
-
 As noted above, handling duplicates is an important feature when reading in raw
 data. That said, you may want to avoid introducing duplicates as part of a data
 processing pipeline (from methods like :meth:`pandas.concat`,
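The behavior documented here — opting out of duplicate labels — can be sketched with the ``allows_duplicate_labels`` flag (assuming pandas >= 1.2):

```python
import pandas as pd

# Flag the frame so any operation that would introduce duplicate
# index labels raises instead of silently duplicating them.
df = pd.DataFrame({"A": [0, 1]}, index=["a", "b"]).set_flags(
    allows_duplicate_labels=False
)

try:
    # Concatenating the frame with itself would duplicate "a" and "b".
    pd.concat([df, df])
except pd.errors.DuplicateLabelError as err:
    print("rejected:", type(err).__name__)
```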

doc/source/user_guide/groupby.rst

Lines changed: 0 additions & 2 deletions
@@ -1264,8 +1264,6 @@ with
 Numba accelerated routines
 --------------------------

-.. versionadded:: 1.1
-
 If `Numba <https://numba.pydata.org/>`__ is installed as an optional dependency, the ``transform`` and
 ``aggregate`` methods support ``engine='numba'`` and ``engine_kwargs`` arguments.
 See :ref:`enhancing performance with Numba <enhancingperf.numba>` for general usage of the arguments
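A minimal sketch of the ``transform`` call this hunk documents; the default engine is used here so it runs without the optional Numba dependency:

```python
import pandas as pd

df = pd.DataFrame({"key": ["a", "a", "b"], "value": [1.0, 2.0, 3.0]})

# With Numba installed, engine="numba" (plus optional engine_kwargs such
# as {"nopython": True}) can be passed to transform/aggregate; the default
# engine is used here so the sketch runs anywhere.
out = df.groupby("key")["value"].transform("mean")
print(out.tolist())  # [1.5, 1.5, 3.0]
```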

doc/source/user_guide/io.rst

Lines changed: 2 additions & 32 deletions
@@ -158,12 +158,6 @@ dtype : Type name or dict of column -> type, default ``None``
     and not interpret dtype. If converters are specified, they will be applied INSTEAD
     of dtype conversion.

-    .. versionadded:: 1.5.0
-
-    Support for defaultdict was added. Specify a defaultdict as input where
-    the default determines the dtype of the columns which are not explicitly
-    listed.
-
 dtype_backend : {"numpy_nullable", "pyarrow"}, defaults to NumPy backed DataFrames
     Which dtype_backend to use, e.g. whether a DataFrame should have NumPy
     arrays, nullable dtypes are used for all dtypes that have a nullable
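The ``defaultdict`` support removed from the note above can be sketched like this (assuming pandas >= 1.5):

```python
from collections import defaultdict
from io import StringIO

import pandas as pd

# Columns not listed explicitly ("b", "c") fall back to the
# defaultdict's default dtype; "a" is pinned to int64.
dtypes = defaultdict(lambda: "float64", a="int64")
df = pd.read_csv(StringIO("a,b,c\n1,2,3"), dtype=dtypes)
print(df.dtypes)
```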
@@ -177,12 +171,8 @@ dtype_backend : {"numpy_nullable", "pyarrow"}, defaults to NumPy backed DataFram
 engine : {``'c'``, ``'python'``, ``'pyarrow'``}
     Parser engine to use. The C and pyarrow engines are faster, while the python engine
     is currently more feature-complete. Multithreading is currently only supported by
-    the pyarrow engine.
-
-    .. versionadded:: 1.4.0
-
-    The "pyarrow" engine was added as an *experimental* engine, and some features
-    are unsupported, or may not work correctly, with this engine.
+    the pyarrow engine.The "pyarrow" engine was added as an *experimental* engine,
+    and some features are unsupported, or may not work correctly, with this engine.
 converters : dict, default ``None``
     Dict of functions for converting values in certain columns. Keys can either be
     integers or column labels.
@@ -357,8 +347,6 @@ on_bad_lines : {{'error', 'warn', 'skip'}}, default 'error'
     - 'warn', print a warning when a bad line is encountered and skip that line.
     - 'skip', skip bad lines without raising or warning when they are encountered.

-    .. versionadded:: 1.3.0
-
 .. _io.dtypes:

 Specifying column data types
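A quick sketch of the ``on_bad_lines='skip'`` option listed in the hunk above (assuming pandas >= 1.3):

```python
from io import StringIO

import pandas as pd

data = "a,b,c\n1,2,3\n4,5,6,7\n8,9,10"

# "skip" drops the over-long second row instead of raising ParserError.
df = pd.read_csv(StringIO(data), on_bad_lines="skip")
print(df)
```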
@@ -937,8 +925,6 @@ DD/MM/YYYY instead. For convenience, a ``dayfirst`` keyword is provided:
 Writing CSVs to binary file objects
 +++++++++++++++++++++++++++++++++++

-.. versionadded:: 1.2.0
-
 ``df.to_csv(..., mode="wb")`` allows writing a CSV to a file object
 opened binary mode. In most cases, it is not necessary to specify
 ``mode`` as pandas will auto-detect whether the file object is
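A minimal sketch of writing a CSV to a binary buffer (assuming pandas >= 1.2):

```python
import io

import pandas as pd

buf = io.BytesIO()
# The buffer is binary, so pandas encodes the CSV text before writing;
# mode="wb" is optional since the handle's mode is auto-detected.
pd.DataFrame({"a": [1, 2]}).to_csv(buf, index=False, mode="wb")
print(buf.getvalue())
```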
@@ -1124,8 +1110,6 @@ You can elect to skip bad lines:
    data = "a,b,c\n1,2,3\n4,5,6,7\n8,9,10"
    pd.read_csv(StringIO(data), on_bad_lines="skip")

-.. versionadded:: 1.4.0
-
 Or pass a callable function to handle the bad line if ``engine="python"``.
 The bad line will be a list of strings that was split by the ``sep``:

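The callable form described above can be sketched as follows; ``trim_bad_line`` is an illustrative handler, not part of the pandas API (assuming pandas >= 1.4):

```python
from io import StringIO

import pandas as pd

data = "a,b,c\n1,2,3\n4,5,6,7\n8,9,10"

def trim_bad_line(fields):
    # fields is the bad row split by sep; keep only the first three values.
    return fields[:3]

df = pd.read_csv(StringIO(data), on_bad_lines=trim_bad_line, engine="python")
print(df)
```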

@@ -1553,8 +1537,6 @@ functions - the following example shows reading a CSV file:

    df = pd.read_csv("https://download.bls.gov/pub/time.series/cu/cu.item", sep="\t")

-.. versionadded:: 1.3.0
-
 A custom header can be sent alongside HTTP(s) requests by passing a dictionary
 of header key value mappings to the ``storage_options`` keyword argument as shown below:

@@ -1606,8 +1588,6 @@ More sample configurations and documentation can be found at `S3Fs documentation
 If you do *not* have S3 credentials, you can still access public
 data by specifying an anonymous connection, such as

-.. versionadded:: 1.2.0
-
 .. code-block:: python

    pd.read_csv(
@@ -2541,8 +2521,6 @@ Links can be extracted from cells along with the text using ``extract_links="all
    df[("GitHub", None)]
    df[("GitHub", None)].str[1]

-.. versionadded:: 1.5.0
-
 .. _io.html:

 Writing to HTML files
@@ -2732,8 +2710,6 @@ parse HTML tables in the top-level pandas io function ``read_html``.
 LaTeX
 -----

-.. versionadded:: 1.3.0
-
 Currently there are no methods to read from LaTeX, only output methods.

 Writing to LaTeX files
@@ -2772,8 +2748,6 @@ XML
 Reading XML
 '''''''''''

-.. versionadded:: 1.3.0
-
 The top-level :func:`~pandas.io.xml.read_xml` function can accept an XML
 string/file/URL and will parse nodes and attributes into a pandas ``DataFrame``.
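A minimal ``read_xml`` sketch; ``parser="etree"`` uses the standard library so the optional lxml dependency is not needed (assuming pandas >= 1.3):

```python
from io import StringIO

import pandas as pd

xml = """<data>
  <row><shape>square</shape><sides>4</sides></row>
  <row><shape>circle</shape><sides>0</sides></row>
</data>"""

# Each repeating <row> element becomes one DataFrame row.
df = pd.read_xml(StringIO(xml), parser="etree")
print(df)
```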

@@ -3099,8 +3073,6 @@ supports parsing such sizeable files using `lxml's iterparse`_ and `etree's iter
 which are memory-efficient methods to iterate through an XML tree and extract specific elements and attributes.
 without holding entire tree in memory.

-.. versionadded:: 1.5.0
-
 .. _`lxml's iterparse`: https://lxml.de/3.2/parsing.html#iterparse-and-iterwalk
 .. _`etree's iterparse`: https://docs.python.org/3/library/xml.etree.elementtree.html#xml.etree.ElementTree.iterparse
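A sketch of the ``iterparse`` streaming mode mentioned above; it accepts only file paths, so a temporary file stands in for a large dump (assuming pandas >= 1.5):

```python
import os
import tempfile

import pandas as pd

xml = "<data><row><a>1</a><b>x</b></row><row><a>2</a><b>y</b></row></data>"

with tempfile.NamedTemporaryFile("w", suffix=".xml", delete=False) as fh:
    fh.write(xml)

# iterparse maps a repeating element to the child elements (or attributes)
# to extract while streaming, instead of building the whole tree in memory.
df = pd.read_xml(fh.name, iterparse={"row": ["a", "b"]}, parser="etree")
os.unlink(fh.name)
print(df)
```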

@@ -3139,8 +3111,6 @@ of reading in Wikipedia's very large (12 GB+) latest article data dump.
 Writing XML
 '''''''''''

-.. versionadded:: 1.3.0
-
 ``DataFrame`` objects have an instance method ``to_xml`` which renders the
 contents of the ``DataFrame`` as an XML document.
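The corresponding writer, again using the stdlib parser to avoid the optional lxml dependency (assuming pandas >= 1.3):

```python
import pandas as pd

df = pd.DataFrame({"shape": ["square", "circle"], "sides": [4, 0]})

# Render the frame as an XML document string.
xml = df.to_xml(parser="etree")
print(xml)
```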

doc/source/user_guide/reshaping.rst

Lines changed: 0 additions & 2 deletions
@@ -478,8 +478,6 @@ The values can be cast to a different type using the ``dtype`` argument.

    pd.get_dummies(df, dtype=np.float32).dtypes

-.. versionadded:: 1.5.0
-
 :func:`~pandas.from_dummies` converts the output of :func:`~pandas.get_dummies` back into
 a :class:`Series` of categorical values from indicator values.
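The round trip this hunk documents, as a short sketch (assuming pandas >= 1.5):

```python
import pandas as pd

s = pd.Series(["a", "b", "a"])
dummies = pd.get_dummies(s)

# from_dummies inverts get_dummies, recovering the original categories.
roundtrip = pd.from_dummies(dummies)
print(roundtrip)
```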

doc/source/user_guide/text.rst

Lines changed: 0 additions & 2 deletions
@@ -335,8 +335,6 @@ regular expression object will raise a ``ValueError``.
 ``removeprefix`` and ``removesuffix`` have the same effect as ``str.removeprefix`` and ``str.removesuffix`` added in
 `Python 3.9 <https://docs.python.org/3/library/stdtypes.html#str.removeprefix>`__:

-.. versionadded:: 1.4.0
-
 .. ipython:: python

    s = pd.Series(["str_foo", "str_bar", "no_prefix"])
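Running the accessor from this hunk (assuming pandas >= 1.4):

```python
import pandas as pd

s = pd.Series(["str_foo", "str_bar", "no_prefix"])

# Values without the prefix are returned unchanged.
out = s.str.removeprefix("str_")
print(out.tolist())  # ['foo', 'bar', 'no_prefix']
```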

doc/source/user_guide/timeseries.rst

Lines changed: 0 additions & 2 deletions
@@ -1964,8 +1964,6 @@ Note the use of ``'start'`` for ``origin`` on the last example. In that case, ``
 Backward resample
 ~~~~~~~~~~~~~~~~~

-.. versionadded:: 1.3.0
-
 Instead of adjusting the beginning of bins, sometimes we need to fix the end of the bins to make a backward resample with a given ``freq``. The backward resample sets ``closed`` to ``'right'`` by default since the last value should be considered as the edge point for the last bin.

 We can set ``origin`` to ``'end'``. The value for a specific ``Timestamp`` index stands for the resample result from the current ``Timestamp`` minus ``freq`` to the current ``Timestamp`` with a right close.
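A sketch of the backward resample described above; the data and frequencies are illustrative (assuming pandas >= 1.3):

```python
import pandas as pd

ts = pd.Series(
    range(6),
    index=pd.date_range("2000-01-01 00:00", periods=6, freq="10min"),
)

# origin="end" anchors the last bin edge at the final timestamp, and
# closed defaults to "right" for a backward resample.
res = ts.resample("30min", origin="end").sum()
print(res)
```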

doc/source/user_guide/visualization.rst

Lines changed: 0 additions & 2 deletions
@@ -649,8 +649,6 @@ each point:

 If a categorical column is passed to ``c``, then a discrete colorbar will be produced:

-.. versionadded:: 1.3.0
-
 .. ipython:: python

    @savefig scatter_plot_categorical.png

doc/source/user_guide/window.rst

Lines changed: 2 additions & 12 deletions
@@ -76,9 +76,6 @@ which will first group the data by the specified keys and then perform a windowi
 <https://en.wikipedia.org/wiki/Kahan_summation_algorithm>`__ is used
 to compute the rolling sums to preserve accuracy as much as possible.

-
-.. versionadded:: 1.3.0
-
 Some windowing operations also support the ``method='table'`` option in the constructor which
 performs the windowing operation over an entire :class:`DataFrame` instead of a single column at a time.
 This can provide a useful performance benefit for a :class:`DataFrame` with many columns
@@ -100,8 +97,6 @@ be calculated with :meth:`~Rolling.apply` by specifying a separate column of wei
    df = pd.DataFrame([[1, 2, 0.6], [2, 3, 0.4], [3, 4, 0.2], [4, 5, 0.7]])
    df.rolling(2, method="table", min_periods=0).apply(weighted_mean, raw=True, engine="numba") # noqa: E501

-.. versionadded:: 1.3
-
 Some windowing operations also support an ``online`` method after constructing a windowing object
 which returns a new object that supports passing in new :class:`DataFrame` or :class:`Series` objects
 to continue the windowing calculation with the new values (i.e. online calculations).
@@ -182,8 +177,6 @@ By default the labels are set to the right edge of the window, but a

 This can also be applied to datetime-like indices.

-.. versionadded:: 1.3.0
-
 .. ipython:: python

    df = pd.DataFrame(
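A sketch of a centered window over a datetime-like index, as documented in this hunk; the frame is illustrative (assuming pandas >= 1.3):

```python
import pandas as pd

df = pd.DataFrame(
    {"A": [0, 1, 2, 3, 4]},
    index=pd.date_range("2020-01-01", periods=5, freq="1D"),
)

# With a datetime-like index, center=True places each label in the
# middle of its offset window rather than at the right edge.
out = df.rolling("3D", center=True).sum()
print(out)
```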
@@ -363,11 +356,8 @@ Numba will be applied in potentially two routines:
 The ``engine_kwargs`` argument is a dictionary of keyword arguments that will be passed into the
 `numba.jit decorator <https://numba.readthedocs.io/en/stable/user/jit.html>`__.
 These keyword arguments will be applied to *both* the passed function (if a standard Python function)
-and the apply for loop over each window.
-
-.. versionadded:: 1.3.0
-
-``mean``, ``median``, ``max``, ``min``, and ``sum`` also support the ``engine`` and ``engine_kwargs`` arguments.
+and the apply for loop over each window. ``mean``, ``median``, ``max``, ``min``, and ``sum``
+also support the ``engine`` and ``engine_kwargs`` arguments.

 .. _window.cov_corr:

0 commit comments
