BUG: df.to_sql() loses datetime precision when uploading data to a SQL Server temp table #62995

@ebj-jyskebank-dk

Description

Pandas version checks

  • I have checked that this issue has not already been reported.

  • I have confirmed this bug exists on the latest version of pandas.

  • I have confirmed this bug exists on the main branch of pandas.

Reproducible Example

import pandas as pd
from my_module import engine # A SQLAlchemy engine (mssql+pyodbc)
from datetime import datetime, date, time, MAXYEAR
from sqlalchemy.dialects import mssql
from sqlalchemy import text

TODAY = datetime.combine(date.today(), time.min)
MAX_DATE = datetime.combine(date(MAXYEAR, 12, 31), time.max)

print(f"Today: {TODAY}, Max Date: {MAX_DATE}")

df = pd.DataFrame(
    {
        "tmstmp": [TODAY, MAX_DATE],
        "foo": [1, 2],
    },
)
print(df)

for table_name, msg in [
    ("pandas_test", "Result from pandas_test table:"),
    ("#pandas_test", "Result from #pandas_test temp table:"),
]:
    df.to_sql(
        name=table_name,
        schema="dbo",
        con=engine,
        if_exists="replace",
        index=False,
        dtype={"tmstmp": mssql.DATETIME2(6)},
    )

    with engine.begin() as conn:
        res = conn.execute(
            text(
                f"SELECT CONVERT(varchar, tmstmp) as tmstmp_str, foo FROM {table_name};"
            )
        ).fetchall()

    print(f"{msg:<36} {res}")

Issue Description

When using df.to_sql() to upload data to a temporary table in Azure SQL Server, the sub-second part of the datetime values is lost. When uploading to a normal (non-temporary) table, the precision is preserved as expected.

The example outputs:

Today: 2025-11-05 00:00:00, Max Date: 9999-12-31 23:59:59.999999
                       tmstmp  foo
0         2025-11-05 00:00:00    1
1  9999-12-31 23:59:59.999999    2
Result from pandas_test table:       [('2025-11-05 00:00:00.000000', 1), ('9999-12-31 23:59:59.999999', 2)]
Result from #pandas_test temp table: [('2025-11-05 00:00:00.000000', 1), ('9999-12-31 23:59:59.000000', 2)]

In the reproducible example I use SQLAlchemy and CONVERT the timestamp to text on the server before reading it back, so no conversion or data loss can occur on the read path. This demonstrates that the precision is lost when to_sql() writes the data.
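
To rule out pandas entirely on the write path, a minimal probe can create the temp table and insert MAX_DATE with a plain parameterized INSERT on a single connection (temp tables are connection-scoped). This is a sketch assuming the same engine and MAX_DATE as above; #probe is a throwaway table name:

from sqlalchemy import text

with engine.begin() as conn:
    # Create, insert and read back on one connection, since the temp table
    # is only visible to the connection that created it.
    conn.execute(text("CREATE TABLE #probe (tmstmp DATETIME2(6))"))
    conn.execute(text("INSERT INTO #probe (tmstmp) VALUES (:ts)"), {"ts": MAX_DATE})
    print(conn.execute(
        text("SELECT CONVERT(varchar(27), tmstmp, 121) FROM #probe")
    ).scalar())

If this probe also truncates the fractional seconds, the loss happens in the driver's parameter binding rather than in pandas; if it preserves them, the problem is specific to the to_sql() insert path.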

Expected Behavior

Sub-second precision should be preserved when uploading data to a SQL Server table whose column type supports it (here DATETIME2(6)), for both ordinary and temporary tables.
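
Until this is fixed, a possible workaround is to pass a custom insert method to to_sql() via its documented method callable, which receives (pd_table, conn, keys, data_iter). The sketch below (insert_timestamps_as_text is a hypothetical helper; column names follow the example above) binds the timestamp as an ISO-8601 string and lets SQL Server convert it to DATETIME2(6), so the ODBC driver never chooses a datetime parameter type for it:

from sqlalchemy import text

def insert_timestamps_as_text(pd_table, conn, keys, data_iter):
    # Bind tmstmp as a string and CONVERT it server-side (style 121,
    # ODBC canonical), sidestepping the driver's datetime binding.
    cols = ", ".join(keys)
    params = ", ".join(
        "CONVERT(DATETIME2(6), :tmstmp, 121)" if k == "tmstmp" else f":{k}"
        for k in keys
    )
    stmt = text(f"INSERT INTO [{pd_table.name}] ({cols}) VALUES ({params})")
    rows = [
        {k: (str(v) if k == "tmstmp" else v) for k, v in zip(keys, row)}
        for row in data_iter
    ]
    conn.execute(stmt, rows)

df.to_sql(
    name="#pandas_test",
    con=engine,
    if_exists="replace",
    index=False,
    dtype={"tmstmp": mssql.DATETIME2(6)},
    method=insert_timestamps_as_text,
)

Whether to_sql() creates and fills the # temp table on the same pooled connection is the same assumption the original example already relies on for reading it back.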

Installed Versions

INSTALLED VERSIONS

commit : 9c8bc3e
python : 3.14.0
python-bits : 64
OS : Linux
OS-release : 6.6.87.2-microsoft-standard-WSL2
Version : #1 SMP PREEMPT_DYNAMIC Thu Jun 5 18:30:46 UTC 2025
machine : x86_64
processor :
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : C.UTF-8

pandas : 2.3.3
numpy : 2.3.4
pytz : 2025.2
dateutil : 2.9.0.post0
pip : 25.2
Cython : None
sphinx : None
IPython : 9.6.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2025.9.0
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.6
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 22.0.0
pyreadstat : None
pytest : 8.4.2
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : 2.0.44
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.2
qtpy : None
pyqt5 : None

    Labels

    Bug, Needs Triage