
Commit 3be960e

Merge branch 'main' into fix-dst-resample-issue
2 parents e4a6616 + 3157d07


65 files changed, +485 -637 lines changed

doc/source/user_guide/duplicates.rst

Lines changed: 0 additions & 2 deletions
@@ -109,8 +109,6 @@ with the same label.
 Disallowing Duplicate Labels
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-.. versionadded:: 1.2.0
-
 As noted above, handling duplicates is an important feature when reading in raw
 data. That said, you may want to avoid introducing duplicates as part of a data
 processing pipeline (from methods like :meth:`pandas.concat`,
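
For context on the feature documented in this hunk: duplicate-label checking is opted into per object. A minimal sketch with illustrative values (assumes a pandas version that provides ``allows_duplicate_labels``):

    import pandas as pd

    # Duplicate index labels are allowed by default.
    s = pd.Series([0, 1, 2], index=["a", "b", "b"])

    # Opting out makes the same object raise DuplicateLabelError.
    try:
        s.set_flags(allows_duplicate_labels=False)
    except pd.errors.DuplicateLabelError as err:
        print(err)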

doc/source/user_guide/groupby.rst

Lines changed: 0 additions & 2 deletions
@@ -1264,8 +1264,6 @@ with
 Numba accelerated routines
 --------------------------

-.. versionadded:: 1.1
-
 If `Numba <https://numba.pydata.org/>`__ is installed as an optional dependency, the ``transform`` and
 ``aggregate`` methods support ``engine='numba'`` and ``engine_kwargs`` arguments.
 See :ref:`enhancing performance with Numba <enhancingperf.numba>` for general usage of the arguments
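
For context, a minimal sketch of the Numba-accelerated groupby path described above (assumes the optional ``numba`` dependency is installed; with ``engine='numba'`` the user function receives each group's values and index as NumPy arrays):

    import pandas as pd

    def demean(values, index):
        # Compiled by numba; operates on one group's values at a time.
        return values - values.mean()

    df = pd.DataFrame({"key": ["a", "a", "b", "b"], "value": [1.0, 2.0, 3.0, 5.0]})
    out = df.groupby("key")["value"].transform(demean, engine="numba")
    print(out)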

doc/source/user_guide/io.rst

Lines changed: 5 additions & 33 deletions
@@ -156,13 +156,9 @@ dtype : Type name or dict of column -> type, default ``None``
 Data type for data or columns. E.g. ``{'a': np.float64, 'b': np.int32, 'c': 'Int64'}``
 Use ``str`` or ``object`` together with suitable ``na_values`` settings to preserve
 and not interpret dtype. If converters are specified, they will be applied INSTEAD
-of dtype conversion.
-
-.. versionadded:: 1.5.0
-
-Support for defaultdict was added. Specify a defaultdict as input where
-the default determines the dtype of the columns which are not explicitly
-listed.
+of dtype conversion. Specify a defaultdict as input where
+the default determines the dtype of the columns which are not explicitly
+listed.

 dtype_backend : {"numpy_nullable", "pyarrow"}, defaults to NumPy backed DataFrames
 Which dtype_backend to use, e.g. whether a DataFrame should have NumPy

@@ -177,12 +173,8 @@ dtype_backend : {"numpy_nullable", "pyarrow"}, defaults to NumPy backed DataFram
 engine : {``'c'``, ``'python'``, ``'pyarrow'``}
 Parser engine to use. The C and pyarrow engines are faster, while the python engine
 is currently more feature-complete. Multithreading is currently only supported by
-the pyarrow engine.
-
-.. versionadded:: 1.4.0
-
-The "pyarrow" engine was added as an *experimental* engine, and some features
-are unsupported, or may not work correctly, with this engine.
+the pyarrow engine. Some features of the "pyarrow" engine
+are unsupported or may not work correctly.
 converters : dict, default ``None``
 Dict of functions for converting values in certain columns. Keys can either be
 integers or column labels.

@@ -355,8 +347,6 @@ on_bad_lines : {{'error', 'warn', 'skip'}}, default 'error'
 - 'warn', print a warning when a bad line is encountered and skip that line.
 - 'skip', skip bad lines without raising or warning when they are encountered.

-.. versionadded:: 1.3.0
-
 .. _io.dtypes:

 Specifying column data types

@@ -935,8 +925,6 @@ DD/MM/YYYY instead. For convenience, a ``dayfirst`` keyword is provided:
 Writing CSVs to binary file objects
 +++++++++++++++++++++++++++++++++++

-.. versionadded:: 1.2.0
-
 ``df.to_csv(..., mode="wb")`` allows writing a CSV to a file object
 opened binary mode. In most cases, it is not necessary to specify
 ``mode`` as pandas will auto-detect whether the file object is

@@ -1122,8 +1110,6 @@ You can elect to skip bad lines:
 data = "a,b,c\n1,2,3\n4,5,6,7\n8,9,10"
 pd.read_csv(StringIO(data), on_bad_lines="skip")

-.. versionadded:: 1.4.0
-
 Or pass a callable function to handle the bad line if ``engine="python"``.
 The bad line will be a list of strings that was split by the ``sep``:

@@ -1547,8 +1533,6 @@ functions - the following example shows reading a CSV file:

 df = pd.read_csv("https://download.bls.gov/pub/time.series/cu/cu.item", sep="\t")

-.. versionadded:: 1.3.0
-
 A custom header can be sent alongside HTTP(s) requests by passing a dictionary
 of header key value mappings to the ``storage_options`` keyword argument as shown below:

@@ -1600,8 +1584,6 @@ More sample configurations and documentation can be found at `S3Fs documentation
 If you do *not* have S3 credentials, you can still access public
 data by specifying an anonymous connection, such as

-.. versionadded:: 1.2.0
-
 .. code-block:: python

 pd.read_csv(

@@ -2535,8 +2517,6 @@ Links can be extracted from cells along with the text using ``extract_links="all
 df[("GitHub", None)]
 df[("GitHub", None)].str[1]

-.. versionadded:: 1.5.0
-
 .. _io.html:

 Writing to HTML files

@@ -2726,8 +2706,6 @@ parse HTML tables in the top-level pandas io function ``read_html``.
 LaTeX
 -----

-.. versionadded:: 1.3.0
-
 Currently there are no methods to read from LaTeX, only output methods.

 Writing to LaTeX files

@@ -2766,8 +2744,6 @@ XML
 Reading XML
 '''''''''''

-.. versionadded:: 1.3.0
-
 The top-level :func:`~pandas.io.xml.read_xml` function can accept an XML
 string/file/URL and will parse nodes and attributes into a pandas ``DataFrame``.

@@ -3093,8 +3069,6 @@ supports parsing such sizeable files using `lxml's iterparse`_ and `etree's iter
 which are memory-efficient methods to iterate through an XML tree and extract specific elements and attributes.
 without holding entire tree in memory.

-.. versionadded:: 1.5.0
-
 .. _`lxml's iterparse`: https://lxml.de/3.2/parsing.html#iterparse-and-iterwalk
 .. _`etree's iterparse`: https://docs.python.org/3/library/xml.etree.elementtree.html#xml.etree.ElementTree.iterparse

@@ -3133,8 +3107,6 @@ of reading in Wikipedia's very large (12 GB+) latest article data dump.
 Writing XML
 '''''''''''

-.. versionadded:: 1.3.0
-
 ``DataFrame`` objects have an instance method ``to_xml`` which renders the
 contents of the ``DataFrame`` as an XML document.
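
For context, a minimal sketch of the ``defaultdict`` dtype behaviour kept by the reworded passage above (illustrative data; columns not listed explicitly fall back to the default factory's dtype):

    from collections import defaultdict
    from io import StringIO

    import pandas as pd

    data = "a,b,c\n1,2,3"
    # "a" is read as int64; "b" and "c" fall back to float64.
    dtypes = defaultdict(lambda: "float64", a="int64")
    print(pd.read_csv(StringIO(data), dtype=dtypes).dtypes)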

doc/source/user_guide/reshaping.rst

Lines changed: 0 additions & 2 deletions
@@ -478,8 +478,6 @@ The values can be cast to a different type using the ``dtype`` argument.

 pd.get_dummies(df, dtype=np.float32).dtypes

-.. versionadded:: 1.5.0
-
 :func:`~pandas.from_dummies` converts the output of :func:`~pandas.get_dummies` back into
 a :class:`Series` of categorical values from indicator values.
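
For context, a minimal round-trip sketch of ``get_dummies``/``from_dummies`` as described above (illustrative values):

    import pandas as pd

    df = pd.DataFrame({"col": ["a", "b", "a"]})
    dummies = pd.get_dummies(df)              # indicator columns col_a, col_b
    print(pd.from_dummies(dummies, sep="_"))  # recovers the original labels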

doc/source/user_guide/timeseries.rst

Lines changed: 0 additions & 2 deletions
@@ -1964,8 +1964,6 @@ Note the use of ``'start'`` for ``origin`` on the last example. In that case, ``
 Backward resample
 ~~~~~~~~~~~~~~~~~

-.. versionadded:: 1.3.0
-
 Instead of adjusting the beginning of bins, sometimes we need to fix the end of the bins to make a backward resample with a given ``freq``. The backward resample sets ``closed`` to ``'right'`` by default since the last value should be considered as the edge point for the last bin.

 We can set ``origin`` to ``'end'``. The value for a specific ``Timestamp`` index stands for the resample result from the current ``Timestamp`` minus ``freq`` to the current ``Timestamp`` with a right close.
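
For context, a minimal sketch of the backward resample described above, anchoring bins to the end of the data with ``origin='end'`` (illustrative values):

    import pandas as pd

    idx = pd.date_range("2000-10-01 23:30:00", periods=8, freq="7min")
    ts = pd.Series(range(8), index=idx)
    # Bins end at the last timestamp; closed="right" is the default here.
    print(ts.resample("17min", origin="end").sum())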

doc/source/user_guide/visualization.rst

Lines changed: 0 additions & 2 deletions
@@ -645,8 +645,6 @@ each point:

 If a categorical column is passed to ``c``, then a discrete colorbar will be produced:

-.. versionadded:: 1.3.0
-
 .. ipython:: python

 @savefig scatter_plot_categorical.png
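
For context, a minimal sketch of the discrete-colorbar behaviour documented above (assumes matplotlib is installed; illustrative data):

    import pandas as pd

    df = pd.DataFrame(
        {
            "x": [1, 2, 3, 4],
            "y": [2, 4, 1, 3],
            "species": pd.Categorical(["setosa", "setosa", "virginica", "virginica"]),
        }
    )
    # A categorical column passed to ``c`` produces a discrete colorbar.
    df.plot.scatter(x="x", y="y", c="species", cmap="viridis")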

doc/source/user_guide/window.rst

Lines changed: 0 additions & 8 deletions
@@ -77,8 +77,6 @@ which will first group the data by the specified keys and then perform a windowi
 to compute the rolling sums to preserve accuracy as much as possible.


-.. versionadded:: 1.3.0
-
 Some windowing operations also support the ``method='table'`` option in the constructor which
 performs the windowing operation over an entire :class:`DataFrame` instead of a single column at a time.
 This can provide a useful performance benefit for a :class:`DataFrame` with many columns

@@ -100,8 +98,6 @@ be calculated with :meth:`~Rolling.apply` by specifying a separate column of wei
 df = pd.DataFrame([[1, 2, 0.6], [2, 3, 0.4], [3, 4, 0.2], [4, 5, 0.7]])
 df.rolling(2, method="table", min_periods=0).apply(weighted_mean, raw=True, engine="numba") # noqa: E501

-.. versionadded:: 1.3
-
 Some windowing operations also support an ``online`` method after constructing a windowing object
 which returns a new object that supports passing in new :class:`DataFrame` or :class:`Series` objects
 to continue the windowing calculation with the new values (i.e. online calculations).

@@ -182,8 +178,6 @@ By default the labels are set to the right edge of the window, but a

 This can also be applied to datetime-like indices.

-.. versionadded:: 1.3.0
-
 .. ipython:: python

 df = pd.DataFrame(

@@ -365,8 +359,6 @@ The ``engine_kwargs`` argument is a dictionary of keyword arguments that will be
 These keyword arguments will be applied to *both* the passed function (if a standard Python function)
 and the apply for loop over each window.

-.. versionadded:: 1.3.0
-
 ``mean``, ``median``, ``max``, ``min``, and ``sum`` also support the ``engine`` and ``engine_kwargs`` arguments.

 .. _window.cov_corr:
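
For context, a minimal sketch of the ``engine``/``engine_kwargs`` support on the built-in window reductions mentioned above (assumes the optional ``numba`` dependency is installed):

    import pandas as pd

    df = pd.DataFrame({"a": range(10)})
    # Same Numba knobs as Rolling.apply: nopython, nogil, parallel.
    out = df.rolling(3).mean(
        engine="numba",
        engine_kwargs={"nopython": True, "nogil": False, "parallel": False},
    )
    print(out)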

doc/source/whatsnew/index.rst

Lines changed: 1 addition & 0 deletions
@@ -24,6 +24,7 @@ Version 2.3
 .. toctree::
 :maxdepth: 2

+v2.3.4
 v2.3.3
 v2.3.2
 v2.3.1

doc/source/whatsnew/v2.3.4.rst

Lines changed: 21 additions & 0 deletions
@@ -0,0 +1,21 @@
+.. _whatsnew_234:
+
+What's new in 2.3.4 (November XX, 2025)
+----------------------------------------
+
+These are the changes in pandas 2.3.4. See :ref:`release` for a full changelog
+including other versions of pandas.
+
+{{ header }}
+
+.. ---------------------------------------------------------------------------
+
+Bug fixes
+^^^^^^^^^
+- Bug in :meth:`DataFrame.__getitem__` returning modified columns when called with ``slice`` in Python 3.12 (:issue:`57500`)
+
+.. ---------------------------------------------------------------------------
+.. _whatsnew_234.contributors:
+
+Contributors
+~~~~~~~~~~~~

doc/source/whatsnew/v3.0.0.rst

Lines changed: 1 addition & 1 deletion
@@ -1132,7 +1132,6 @@ Interval

 Indexing
 ^^^^^^^^
-- Bug in :meth:`DataFrame.__getitem__` returning modified columns when called with ``slice`` in Python 3.12 (:issue:`57500`)
 - Bug in :meth:`DataFrame.__getitem__` when slicing a :class:`DataFrame` with many rows raised an ``OverflowError`` (:issue:`59531`)
 - Bug in :meth:`DataFrame.__setitem__` on an empty :class:`DataFrame` with a tuple corrupting the frame (:issue:`54385`)
 - Bug in :meth:`DataFrame.from_records` throwing a ``ValueError`` when passed an empty list in ``index`` (:issue:`58594`)

@@ -1260,6 +1259,7 @@ Groupby/resample/rolling
 - Bug in :meth:`Rolling.skew` incorrectly computing skewness for windows following outliers due to numerical instability. The calculation now properly handles catastrophic cancellation by recomputing affected windows (:issue:`47461`)
 - Bug in :meth:`Series.resample` could raise when the date range ended shortly before a non-existent time. (:issue:`58380`)
 - Bug in :meth:`Series.resample` raising error when resampling non-nanosecond resolutions out of bounds for nanosecond precision (:issue:`57427`)
+- Bug in :meth:`Series.rolling.var` and :meth:`Series.rolling.std` computing incorrect results due to numerical instability. (:issue:`47721`, :issue:`52407`, :issue:`54518`, :issue:`55343`)

 Reshaping
 ^^^^^^^^^
