Merged

Changes from 6 commits
2 changes: 0 additions & 2 deletions doc/source/user_guide/duplicates.rst
@@ -109,8 +109,6 @@ with the same label.
Disallowing Duplicate Labels
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. versionadded:: 1.2.0

As noted above, handling duplicates is an important feature when reading in raw
data. That said, you may want to avoid introducing duplicates as part of a data
processing pipeline (from methods like :meth:`pandas.concat`,
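As a quick illustration of the behavior this hunk documents, the following sketch (frame contents are made up for illustration) shows ``allows_duplicate_labels=False`` rejecting an operation that would introduce duplicate labels:

```python
import pandas as pd

# set_flags(allows_duplicate_labels=False) makes pandas raise if an
# operation would introduce duplicate index or column labels.
df = pd.DataFrame({"a": [1, 2]}, index=["x", "y"]).set_flags(
    allows_duplicate_labels=False
)

try:
    pd.concat([df, df])  # would duplicate the "x" and "y" labels
    raised = False
except pd.errors.DuplicateLabelError:
    raised = True
```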
2 changes: 0 additions & 2 deletions doc/source/user_guide/groupby.rst
@@ -1264,8 +1264,6 @@ with
Numba accelerated routines
--------------------------

.. versionadded:: 1.1

If `Numba <https://numba.pydata.org/>`__ is installed as an optional dependency, the ``transform`` and
``aggregate`` methods support ``engine='numba'`` and ``engine_kwargs`` arguments.
See :ref:`enhancing performance with Numba <enhancingperf.numba>` for general usage of the arguments
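A minimal sketch of the ``engine='numba'`` usage described above (data is made up; the fallback covers environments without the optional numba dependency):

```python
import pandas as pd

df = pd.DataFrame({"key": ["a", "a", "b"], "val": [1.0, 2.0, 3.0]})

try:
    # With engine="numba", the user function is JIT-compiled and must have
    # the signature f(values, index).
    out = df.groupby("key")["val"].aggregate(
        lambda values, index: values.sum(), engine="numba"
    )
except ImportError:
    # numba is optional; fall back to the default engine when it is absent.
    out = df.groupby("key")["val"].sum()
```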
38 changes: 5 additions & 33 deletions doc/source/user_guide/io.rst
@@ -156,13 +156,9 @@ dtype : Type name or dict of column -> type, default ``None``
Data type for data or columns. E.g. ``{'a': np.float64, 'b': np.int32, 'c': 'Int64'}``
Use ``str`` or ``object`` together with suitable ``na_values`` settings to preserve
and not interpret dtype. If converters are specified, they will be applied INSTEAD
of dtype conversion.

.. versionadded:: 1.5.0

Support for defaultdict was added. Specify a defaultdict as input where
the default determines the dtype of the columns which are not explicitly
listed.
of dtype conversion. Specify a defaultdict as input where
the default determines the dtype of the columns which are not explicitly
listed.
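A short sketch of the ``defaultdict`` form of ``dtype`` (column names and values are made up for illustration):

```python
from collections import defaultdict
from io import StringIO

import pandas as pd

data = "a,b,c\n1,2,3"
# Column "a" is read as int64; every column not listed explicitly falls
# back to the defaultdict's default factory (str here).
dtype = defaultdict(lambda: str, a="int64")
df = pd.read_csv(StringIO(data), dtype=dtype)
```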

dtype_backend : {"numpy_nullable", "pyarrow"}, defaults to NumPy backed DataFrames
Which dtype_backend to use, e.g. whether a DataFrame should have NumPy
@@ -177,12 +173,8 @@ dtype_backend : {"numpy_nullable", "pyarrow"}, defaults to NumPy backed DataFram
engine : {``'c'``, ``'python'``, ``'pyarrow'``}
Parser engine to use. The C and pyarrow engines are faster, while the python engine
is currently more feature-complete. Multithreading is currently only supported by
the pyarrow engine.

.. versionadded:: 1.4.0

The "pyarrow" engine was added as an *experimental* engine, and some features
are unsupported, or may not work correctly, with this engine.
the pyarrow engine. Some features of the "pyarrow" engine
are unsupported or may not work correctly.
converters : dict, default ``None``
Dict of functions for converting values in certain columns. Keys can either be
integers or column labels.
@@ -355,8 +347,6 @@ on_bad_lines : {{'error', 'warn', 'skip'}}, default 'error'
- 'warn', print a warning when a bad line is encountered and skip that line.
- 'skip', skip bad lines without raising or warning when they are encountered.

.. versionadded:: 1.3.0

.. _io.dtypes:

Specifying column data types
@@ -935,8 +925,6 @@ DD/MM/YYYY instead. For convenience, a ``dayfirst`` keyword is provided:
Writing CSVs to binary file objects
+++++++++++++++++++++++++++++++++++

.. versionadded:: 1.2.0

``df.to_csv(..., mode="wb")`` allows writing a CSV to a file object
opened in binary mode. In most cases, it is not necessary to specify
``mode`` as pandas will auto-detect whether the file object is
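A minimal sketch of writing a CSV to a binary handle (frame contents are made up):

```python
import io

import pandas as pd

df = pd.DataFrame({"a": [1, 2]})
buf = io.BytesIO()
# pandas detects that buf is a binary handle; mode="wb" makes it explicit.
df.to_csv(buf, mode="wb", index=False)
data = buf.getvalue()
```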
@@ -1122,8 +1110,6 @@ You can elect to skip bad lines:
data = "a,b,c\n1,2,3\n4,5,6,7\n8,9,10"
pd.read_csv(StringIO(data), on_bad_lines="skip")

.. versionadded:: 1.4.0

Or pass a callable function to handle the bad line if ``engine="python"``.
The bad line will be a list of strings that was split by the ``sep``:

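A sketch of the callable form (reusing the sample data shown above; the lambda here is an illustrative repair strategy, not a recommendation):

```python
from io import StringIO

import pandas as pd

data = "a,b,c\n1,2,3\n4,5,6,7\n8,9,10"
# The callable receives the offending row pre-split on ``sep`` and returns
# a repaired field list (or None to drop the row). Python engine only.
df = pd.read_csv(
    StringIO(data),
    on_bad_lines=lambda bad: bad[:3],  # keep only the first three fields
    engine="python",
)
```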
@@ -1547,8 +1533,6 @@ functions - the following example shows reading a CSV file:

df = pd.read_csv("https://download.bls.gov/pub/time.series/cu/cu.item", sep="\t")

.. versionadded:: 1.3.0

A custom header can be sent alongside HTTP(s) requests by passing a dictionary
of header key value mappings to the ``storage_options`` keyword argument as shown below:

@@ -1600,8 +1584,6 @@ More sample configurations and documentation can be found at `S3Fs documentation
If you do *not* have S3 credentials, you can still access public
data by specifying an anonymous connection, such as

.. versionadded:: 1.2.0

.. code-block:: python

pd.read_csv(
@@ -2535,8 +2517,6 @@ Links can be extracted from cells along with the text using ``extract_links="all
df[("GitHub", None)]
df[("GitHub", None)].str[1]

.. versionadded:: 1.5.0

.. _io.html:

Writing to HTML files
@@ -2726,8 +2706,6 @@ parse HTML tables in the top-level pandas io function ``read_html``.
LaTeX
-----

.. versionadded:: 1.3.0

Currently there are no methods to read from LaTeX, only output methods.

Writing to LaTeX files
@@ -2766,8 +2744,6 @@ XML
Reading XML
'''''''''''

.. versionadded:: 1.3.0

The top-level :func:`~pandas.io.xml.read_xml` function can accept an XML
string/file/URL and will parse nodes and attributes into a pandas ``DataFrame``.

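A minimal sketch of ``read_xml``, using ``parser="etree"`` so only the standard library parser is needed (the XML document is made up for illustration):

```python
from io import StringIO

import pandas as pd

xml = """<data>
  <row><shape>square</shape><sides>4</sides></row>
  <row><shape>circle</shape><sides>0</sides></row>
</data>"""

# parser="etree" uses the standard library, avoiding the lxml dependency.
df = pd.read_xml(StringIO(xml), parser="etree")
```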
@@ -3093,8 +3069,6 @@ supports parsing such sizeable files using `lxml's iterparse`_ and `etree's iter
which are memory-efficient methods to iterate through an XML tree and extract specific elements
and attributes without holding the entire tree in memory.

.. versionadded:: 1.5.0

.. _`lxml's iterparse`: https://lxml.de/3.2/parsing.html#iterparse-and-iterwalk
.. _`etree's iterparse`: https://docs.python.org/3/library/xml.etree.elementtree.html#xml.etree.ElementTree.iterparse

@@ -3133,8 +3107,6 @@ of reading in Wikipedia's very large (12 GB+) latest article data dump.
Writing XML
'''''''''''

.. versionadded:: 1.3.0

``DataFrame`` objects have an instance method ``to_xml`` which renders the
contents of the ``DataFrame`` as an XML document.

2 changes: 0 additions & 2 deletions doc/source/user_guide/reshaping.rst
@@ -478,8 +478,6 @@ The values can be cast to a different type using the ``dtype`` argument.
pd.get_dummies(df, dtype=np.float32).dtypes
.. versionadded:: 1.5.0

:func:`~pandas.from_dummies` converts the output of :func:`~pandas.get_dummies` back into
a :class:`Series` of categorical values from indicator values.

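A quick sketch of the round trip this hunk documents (the sample Series is made up):

```python
import pandas as pd

s = pd.Series(["a", "b", "a"])
dummies = pd.get_dummies(s)
# from_dummies inverts get_dummies: the single True column in each row
# becomes the recovered category value.
recovered = pd.from_dummies(dummies)
```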
2 changes: 0 additions & 2 deletions doc/source/user_guide/timeseries.rst
@@ -1964,8 +1964,6 @@ Note the use of ``'start'`` for ``origin`` on the last example. In that case, ``
Backward resample
~~~~~~~~~~~~~~~~~

.. versionadded:: 1.3.0

Instead of adjusting the beginning of bins, sometimes we need to fix the end of the bins to make a backward resample with a given ``freq``. The backward resample sets ``closed`` to ``'right'`` by default since the last value should be considered as the edge point for the last bin.

We can set ``origin`` to ``'end'``. The value for a specific ``Timestamp`` index stands for the resample result from the current ``Timestamp`` minus ``freq`` to the current ``Timestamp`` with a right close.
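A minimal sketch of a backward resample with ``origin='end'`` (the series mirrors the style of the examples in this guide; exact bin labels depend on the chosen frequency):

```python
import pandas as pd

ts = pd.Series(
    range(6),
    index=pd.date_range("2000-10-01 23:30:00", periods=6, freq="17min"),
)
# origin="end" anchors the bins to the last timestamp; closed defaults to
# "right", so each label is the inclusive right edge of its bin.
out = ts.resample("7min", origin="end").sum()
```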
2 changes: 0 additions & 2 deletions doc/source/user_guide/visualization.rst
@@ -645,8 +645,6 @@ each point:
If a categorical column is passed to ``c``, then a discrete colorbar will be produced:

.. versionadded:: 1.3.0

.. ipython:: python
@savefig scatter_plot_categorical.png
8 changes: 0 additions & 8 deletions doc/source/user_guide/window.rst
@@ -77,8 +77,6 @@ which will first group the data by the specified keys and then perform a windowi
to compute the rolling sums to preserve accuracy as much as possible.


.. versionadded:: 1.3.0

Some windowing operations also support the ``method='table'`` option in the constructor which
performs the windowing operation over an entire :class:`DataFrame` instead of a single column at a time.
This can provide a useful performance benefit for a :class:`DataFrame` with many columns
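A sketch of ``method='table'`` (the frame is made up; ``method="table"`` is only implemented for the numba engine, so the fallback covers environments without it):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(8.0).reshape(4, 2), columns=["x", "y"])

try:
    # method="table" hands each whole-window 2-D block to the kernel;
    # it requires engine="numba".
    out = df.rolling(2, method="table", min_periods=0).sum(engine="numba")
except ImportError:
    # numba not installed: compute per column with the default engine,
    # which yields the same sums here.
    out = df.rolling(2, min_periods=0).sum()
```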
@@ -100,8 +98,6 @@ be calculated with :meth:`~Rolling.apply` by specifying a separate column of wei
df = pd.DataFrame([[1, 2, 0.6], [2, 3, 0.4], [3, 4, 0.2], [4, 5, 0.7]])
df.rolling(2, method="table", min_periods=0).apply(weighted_mean, raw=True, engine="numba") # noqa: E501
.. versionadded:: 1.3

Some windowing operations also support an ``online`` method after constructing a windowing object
which returns a new object that supports passing in new :class:`DataFrame` or :class:`Series` objects
to continue the windowing calculation with the new values (i.e. online calculations).
@@ -182,8 +178,6 @@ By default the labels are set to the right edge of the window, but a
This can also be applied to datetime-like indices.

.. versionadded:: 1.3.0

.. ipython:: python
df = pd.DataFrame(
@@ -365,8 +359,6 @@ The ``engine_kwargs`` argument is a dictionary of keyword arguments that will be
These keyword arguments will be applied to *both* the passed function (if a standard Python function)
and the apply for loop over each window.

.. versionadded:: 1.3.0

``mean``, ``median``, ``max``, ``min``, and ``sum`` also support the ``engine`` and ``engine_kwargs`` arguments.

.. _window.cov_corr:
4 changes: 0 additions & 4 deletions pandas/_libs/parsers.pyx
@@ -329,10 +329,6 @@ cdef class TextReader:
# source: StringIO or file object
..versionchange:: 1.2.0
removed 'compression', 'memory_map', and 'encoding' argument.
These arguments are outsourced to CParserWrapper.
'source' has to be a file handle.
"""

cdef:
4 changes: 0 additions & 4 deletions pandas/_testing/asserters.py
@@ -931,14 +931,10 @@ def assert_series_equal(
assertion message.
check_index : bool, default True
Whether to check index equivalence. If False, then compare only values.
.. versionadded:: 1.3.0
check_like : bool, default False
If True, ignore the order of the index. Must be False if check_index is False.
Note: same labels must be with the same data.
.. versionadded:: 1.5.0
See Also
--------
testing.assert_index_equal : Check that two Indexes are equal.
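A quick sketch of the ``check_like`` behavior the docstring hunk describes (the two Series are made up):

```python
import pandas as pd

left = pd.Series([1, 2], index=["a", "b"])
right = pd.Series([2, 1], index=["b", "a"])

# check_like=True ignores label order (it requires check_index=True,
# which is the default).
pd.testing.assert_series_equal(left, right, check_like=True)

# Without check_like, the differing label order raises an AssertionError.
try:
    pd.testing.assert_series_equal(left, right)
    strict_ok = True
except AssertionError:
    strict_ok = False
```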
2 changes: 0 additions & 2 deletions pandas/core/algorithms.py
@@ -697,8 +697,6 @@ def factorize(
If True, the sentinel -1 will be used for NaN values. If False,
NaN values will be encoded as non-negative integers and will not drop the
NaN from the uniques of the values.

.. versionadded:: 1.5.0
{size_hint}\

Returns
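A minimal sketch of the ``use_na_sentinel`` behavior described in this docstring hunk (the input array is made up):

```python
import numpy as np
import pandas as pd

values = np.array(["b", None, "a"], dtype=object)

# Default: NaN maps to the sentinel -1 and is dropped from uniques.
codes_default, _ = pd.factorize(values)

# use_na_sentinel=False: NaN receives its own non-negative code and
# appears in uniques.
codes_keep, uniques = pd.factorize(values, use_na_sentinel=False)
```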
2 changes: 0 additions & 2 deletions pandas/core/arrays/base.py
@@ -1592,8 +1592,6 @@ def factorize(
NaN values will be encoded as non-negative integers and will not drop the
NaN from the uniques of the values.
.. versionadded:: 1.5.0
Returns
-------
codes : ndarray
2 changes: 0 additions & 2 deletions pandas/core/arrays/masked.py
@@ -1277,8 +1277,6 @@ def factorize(
NaN values will be encoded as non-negative integers and will not drop the
NaN from the uniques of the values.
.. versionadded:: 1.5.0
Returns
-------
codes : ndarray
2 changes: 0 additions & 2 deletions pandas/core/common.py
@@ -639,8 +639,6 @@ def fill_missing_names(names: Sequence[Hashable | None]) -> list[Hashable]:
"""
If a name is missing then replace it by level_n, where n is the count
.. versionadded:: 1.4.0
Parameters
----------
names : list-like
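As a sketch only: the behavior the ``fill_missing_names`` docstring describes can be written as the comprehension below. This mirrors the documented contract, not pandas's internal implementation.

```python
def fill_missing_names(names):
    """Replace each missing (None) name with "level_n", n being its position.

    A sketch of the documented behavior, not pandas internals.
    """
    return [f"level_{i}" if name is None else name for i, name in enumerate(names)]
```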