Changed file: appendix.Rmd (+67 −4)
@@ -1310,6 +1310,53 @@ For $n \in \mathbb{N}_{0}$, the Chebyshev polynomial $T_{n}$ of degree $n$ is the polynomial

$$T_{n}(x) := \cos (n \arccos(x)).$$
```
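As a quick numerical sanity check of this definition (an illustrative sketch, not part of the notes), one can compare the trigonometric formula against NumPy's Chebyshev basis:

```python
import numpy as np
from numpy.polynomial.chebyshev import chebval

# The coefficient vector (0, ..., 0, 1) in the Chebyshev basis selects T_n.
n = 5
xs = np.linspace(-1.0, 1.0, 101)
t_n = chebval(xs, [0] * n + [1])

# Agrees with T_n(x) = cos(n * arccos(x)) on [-1, 1].
assert np.allclose(t_n, np.cos(n * np.arccos(xs)))
```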
## Polynomial approximation of $\log(x)$ {#polyapprox-log}
```{theorem, polyapprox-log}
Let $\beta\in(0,1]$ and $\epsilon\in(0,1/6]$. Then there exists a polynomial $\tilde{S}$
of degree $O\left({\frac{1}{\beta}\log \left(\frac{1}{\epsilon} \right)}\right)$ such that $\left|\tilde{S}(x)-\frac{\log_b(x)}{3\log_b(2/\beta)}\right|\leq\epsilon$ for all $x\in [\beta,1]$ and any base $b \in \mathbb{N}$, and such that $-1/2 \leq \tilde{S}(x) = \tilde{S}(-x) \leq 1/2$ for all $x\in [-1,1]$.
```
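To get a feel for the statement, here is a numerical sketch (illustration only: it uses the plain truncated Taylor series of $\log$ around $1$ rather than the degree-reduced polynomial $S$ constructed in the proof, and `beta`, `eps` are example values):

```python
import numpy as np

def log_taylor_coeffs(degree):
    # Taylor coefficients of log(x) around x0 = 1:
    # log(x) = sum_{l >= 1} (-1)^(l+1) (x-1)^l / l
    return np.array([0.0] + [(-1) ** (l + 1) / l for l in range(1, degree + 1)])

def approx_scaled_log(x, beta, degree):
    # Truncated series for log(x) / (3 log(2/beta)).
    c = log_taylor_coeffs(degree)
    powers = np.power.outer(x - 1.0, np.arange(degree + 1))
    return powers @ c / (3 * np.log(2 / beta))

beta, eps = 0.5, 1e-2
degree = int(np.ceil((1 / beta) * np.log(1 / eps)))  # degree O((1/beta) log(1/eps))
xs = np.linspace(beta, 1.0, 1000)
target = np.log(xs) / (3 * np.log(2 / beta))
err = np.max(np.abs(approx_scaled_log(xs, beta, degree) - target))
print(degree, err)
assert err <= eps  # uniform error on [beta, 1] is within eps
```

Note that for small $\beta$ the naive truncation needs a noticeably higher degree than the theorem's bound; the proof's machinery is what removes that gap.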
```{proof}
Recall Lemma~\ref{lemma:serieslog}. We follow the same steps as the proof of \cite[Lemma 11]{distributional}. We consider the standard Taylor expansion of $\frac{\log(x)}{3\log(2/\beta)}$ centered at $x_0=1$, which is $f(x)=\frac{1}{3 \log(2/\beta)}\sum_{l \geq 1} \frac{(-1)^{l+1}(x-1)^l}{l}$. We use this polynomial in \cite[Corollary 16]{distributional} with the choice of $\epsilon=\eta/2$, $x_0=1$, $r=1-\beta$, and $\nu = \frac{\beta}{2}$. This corollary gives us another polynomial $S \in \mathbb{C}[x]$ of degree $O(\frac{1}{\beta}\log(\frac{1}{\eta}))$ with the following properties:
Now we show that indeed $B=1/3$ in Lemma~\ref{lemma:cor66} (following the steps of the original proof). We have to consider $f(x_0 +x) = \frac{1}{3 \log(2/\beta)}\sum_{l= 1}^{\infty} \frac{a_l (x+x_0-1)^l}{l}$ with $a_l = (-1)^{1+l}$, and find a bound for $\sum_{l=1}^\infty (r+\nu)^l \frac{|a_l|}{l}$, which in our case (as $|a_l| = 1$ and $r+\nu = 1-\beta/2$) is at most $\frac{1}{3\log(2/\beta)}\sum_{l=1}^\infty (1-\beta/2)^l$. This geometric series evaluates to $\frac{1-\beta/2}{\beta/2} \leq \frac{2}{\beta}$.
To make the polynomial even we define $\widetilde{S}(x) = S(x) + S(-x)$.
Now we show that the error is bounded on the interval $[\beta, 1]$, and that the value of the polynomial is bounded on the interval $[-1,1]$.
Using the properties of Corollary 16, along with the triangle inequality, we can see that the error is bounded as claimed. Here we used, respectively, the fact that the function is even (first equality), that $\widetilde{S}$ can be bounded in terms of $S(x)$ using the triangle inequality (second inequality), and the properties of the polynomial expansion derived before.
If the polynomial expansion has complex coefficients, we can simply discard their imaginary parts.
```
## Polynomial approximation of $1/x$ {#polyapprox-1overx}
We are going to approximate $1/x$ using Chebyshev polynomials, with some additional tricks that will be used to decrease the degree of the polynomial approximation.
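Before the construction, a quick illustration (a sketch only, not the method developed here; `beta` and `deg` are example values): once the singularity at $0$ is cut away, even a plain Chebyshev interpolant of $1/x$ on $[\beta,1]$ converges quickly, and the tricks in this section push the degree down further.

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

beta = 0.1
deg = 30  # illustrative; such constructions scale roughly like (1/beta) * log(1/eps)

# Interpolate f(x) = 1/x at Chebyshev points of the domain [beta, 1].
approx = Chebyshev.interpolate(lambda x: 1.0 / x, deg, domain=[beta, 1.0])

xs = np.linspace(beta, 1.0, 2000)
err = np.max(np.abs(approx(xs) - 1.0 / xs))
print(err)
assert err < 1e-3  # uniform error on [beta, 1] is already tiny at this degree
```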
# Contributions and acknowledgements {#appendix-contributors}
These are my first lecture notes in quantum algorithms for machine learning. They grew out of my old blog, back in 2016/2017. They then took a more concrete form in my Ph.D. thesis (which I did at [IRIF](https://irif.fr) with the support of [Atos](https://atos.net), which I thank), and are now in this extended form with the hope of serving future researchers in QML. I am not (yet!) an expert in the broad field of "quantum computing", and these lecture notes are an attempt at collecting useful knowledge for new researchers in quantum computing. While I strive to be as precise as the lecture notes of [Ronald de Wolf](https://homepages.cwi.nl/~rdewolf/qcnotes.pdf) and [Andrew Childs](https://www.cs.umd.edu/~amchilds/qa/qa.pdf), I know this work is still far from them. Please be indulgent, and help! For instance by signaling imprecisions, errors, and things that can be made clearer.
If you want to give me any feedback, feel free to write me at "scinawa - at - luongo - dot - pro". Or contact me on [Twitter](https://twitter.com/scinawa).
@@ -1620,10 +1667,24 @@ In sparse order, I would like to thank [Dong Ping Zhang](www.dongpingzhang.com),
Changed file: book.bib (+58 −10)
@@ -5,6 +5,30 @@ @misc{adversary-simple
  month = {October},
  publisher = {Carnegie Mellon University},
}
@article{giurgica2022low,
  title = {Low-depth amplitude estimation on a trapped-ion quantum computer},
  author = {Giurgica-Tiron, Tudor and Johri, Sonika and Kerenidis, Iordanis and Nguyen, Jason and Pisenti, Neal and Prakash, Anupam and Sosnova, Ksenia and Wright, Ken and Zeng, William},