|
28 | 28 | "cell_type": "markdown", |
29 | 29 | "metadata": {}, |
30 | 30 | "source": [ |
31 | | - "This minimal tutorial demonstrates how to use the torch frontend for `S2FFT` to compute spherical harmonic transforms. Though `S2FFT` is primarily designed for JAX, this torch functionality is fully unit tested (including gradients) and can be used straightforwardly as a learnable layer within existing models." |
| 31 | + "This minimal tutorial demonstrates how to use the torch frontend for `S2FFT` to compute spherical harmonic transforms. Though `S2FFT` is primarily designed for JAX, this torch functionality is fully unit tested (including gradients) and can be used straightforwardly as a learnable layer within existing models. As the torch functions wrap the JAX implementations, we need to configure JAX to use 64-bit precision floating-point types by default to ensure sufficient precision for the transforms; `S2FFT` will emit a warning if this has not been done." |
32 | 32 | ] |
33 | 33 | }, |
34 | 34 | { |
35 | 35 | "cell_type": "code", |
36 | 36 | "execution_count": 2, |
37 | 37 | "metadata": {}, |
38 | | - "outputs": [ |
39 | | - { |
40 | | - "name": "stderr", |
41 | | - "output_type": "stream", |
42 | | - "text": [ |
43 | | - "JAX is not using 64-bit precision. This will dramatically affect numerical precision at even moderate L.\n" |
44 | | - ] |
45 | | - } |
46 | | - ], |
| 38 | + "outputs": [], |
47 | 39 | "source": [ |
| 40 | + "import jax\n", |
| 41 | + "jax.config.update(\"jax_enable_x64\", True)\n", |
48 | 42 | "import torch \n", |
49 | | - "import numpy as np \n", |
50 | | - "from s2fft.precompute_transforms.spherical import inverse, forward\n", |
51 | | - "from s2fft.precompute_transforms.construct import spin_spherical_kernel\n", |
| 43 | + "import numpy as np\n", |
| 44 | + "from s2fft.transforms.spherical import inverse, forward\n", |
| 45 | + "from s2fft.precompute_transforms.spherical import (\n", |
| 46 | + " inverse as precompute_inverse, forward as precompute_forward\n", |
| 47 | + ")\n", |
| 48 | + "from s2fft.precompute_transforms.construct import spin_spherical_kernel_torch\n", |
52 | 49 | "from s2fft.utils import signal_generator" |
53 | 50 | ] |
54 | 51 | }, |
|
65 | 62 | "metadata": {}, |
66 | 63 | "outputs": [], |
67 | 64 | "source": [ |
68 | | - "L = 64 # Spherical harmonic bandlimit\n", |
69 | | - "rng = np.random.default_rng(1234951510) # Random seed for signal generator\n", |
70 | | - "flm = signal_generator.generate_flm(rng, L, using_torch=True) # Random set of spherical harmonic coefficients" |
| 65 | + "L = 64  # Spherical harmonic bandlimit\n", |
| 66 | + "rng = np.random.default_rng(1234951510)  # Random seed for signal generator\n", |
| 67 | + "flm = torch.from_numpy(signal_generator.generate_flm(rng, L))  # Random spherical harmonic coefficients as a torch tensor" |
71 | 68 | ] |
72 | 69 | }, |
73 | 70 | { |
74 | 71 | "cell_type": "markdown", |
75 | 72 | "metadata": {}, |
76 | 73 | "source": [ |
77 | | - "For the fully precompute transform we must also generate the precompute kernels which we store as a torch tensors." |
| 74 | + "Now let's calculate the signal on the sphere by applying the inverse spherical harmonic transform." |
78 | 75 | ] |
79 | 76 | }, |
80 | 77 | { |
81 | 78 | "cell_type": "code", |
82 | 79 | "execution_count": 4, |
83 | 80 | "metadata": {}, |
84 | | - "outputs": [], |
| 81 | + "outputs": [ |
| 82 | + { |
| 83 | + "name": "stderr", |
| 84 | + "output_type": "stream", |
| 85 | + "text": [ |
| 86 | + "An NVIDIA GPU may be present on this machine, but a CUDA-enabled jaxlib is not installed. Falling back to cpu.\n" |
| 87 | + ] |
| 88 | + } |
| 89 | + ], |
85 | 90 | "source": [ |
86 | | - "inverse_kernel = spin_spherical_kernel(L, using_torch=True, forward=False) \n", |
87 | | - "forward_kernel = spin_spherical_kernel(L, using_torch=True, forward=True) " |
| 91 | + "f = inverse(flm, L, method=\"torch\")" |
88 | 92 | ] |
89 | 93 | }, |
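| | + { |
| | + "cell_type": "markdown", |
| | + "metadata": {}, |
| | + "source": [ |
| | + "As a quick sanity check (a sketch assuming the default `sampling=\"mw\"` scheme), the sampled signal should have shape (L, 2L-1)" |
| | + ] |
| | + }, |
| | + { |
| | + "cell_type": "code", |
| | + "execution_count": null, |
| | + "metadata": {}, |
| | + "outputs": [], |
| | + "source": [ |
| | + "# MW sampling gives L samples in theta and 2L-1 samples in phi\n", |
| | + "print(f.shape)" |
| | + ] |
| | + }, |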
90 | 94 | { |
91 | 95 | "cell_type": "markdown", |
92 | 96 | "metadata": {}, |
93 | 97 | "source": [ |
94 | | - "Now lets calculate the signal on the sphere by applying the inverse spherical harmonic transform" |
| 98 | + "To calculate the corresponding spherical harmonic representation, apply the forward transform." |
95 | 99 | ] |
96 | 100 | }, |
97 | 101 | { |
|
100 | 104 | "metadata": {}, |
101 | 105 | "outputs": [], |
102 | 106 | "source": [ |
103 | | - "f = inverse(flm, L, 0, inverse_kernel, method=\"torch\")" |
| 107 | + "flm_check = forward(f, L, method=\"torch\")" |
104 | 108 | ] |
105 | 109 | }, |
106 | 110 | { |
107 | 111 | "cell_type": "markdown", |
108 | 112 | "metadata": {}, |
109 | 113 | "source": [ |
110 | | - "To calculate the corresponding spherical harmonic representation execute" |
| 114 | + "Now let's check that the round-trip error is at the level expected for 64-bit floating-point arithmetic." |
111 | 115 | ] |
112 | 116 | }, |
113 | 117 | { |
114 | 118 | "cell_type": "code", |
115 | 119 | "execution_count": 6, |
116 | 120 | "metadata": {}, |
117 | | - "outputs": [], |
| 121 | + "outputs": [ |
| 122 | + { |
| 123 | + "name": "stdout", |
| 124 | + "output_type": "stream", |
| 125 | + "text": [ |
| 126 | + "Mean absolute error = 2.8915048238993476e-14\n" |
| 127 | + ] |
| 128 | + } |
| 129 | + ], |
118 | 130 | "source": [ |
119 | | - "flm_check = forward(f, L, 0, forward_kernel, method=\"torch\")" |
| 131 | + "print(f\"Mean absolute error = {np.nanmean(np.abs(flm_check - flm))}\")" |
120 | 132 | ] |
121 | 133 | }, |
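| | + { |
| | + "cell_type": "markdown", |
| | + "metadata": {}, |
| | + "source": [ |
| | + "As noted in the introduction the torch frontend supports gradients, so the transforms can be used as learnable layers. The cell below is a minimal sketch of this, assuming autograd propagates through `method=\"torch\"`: we differentiate a real scalar loss with respect to the harmonic coefficients" |
| | + ] |
| | + }, |
| | + { |
| | + "cell_type": "code", |
| | + "execution_count": null, |
| | + "metadata": {}, |
| | + "outputs": [], |
| | + "source": [ |
| | + "# Sketch: differentiate through the torch frontend (assumes autograd is wired through the wrapper)\n", |
| | + "flm_grad = flm.clone().requires_grad_(True)\n", |
| | + "f_grad = inverse(flm_grad, L, method=\"torch\")\n", |
| | + "loss = f_grad.abs().sum()  # arbitrary real-valued scalar loss\n", |
| | + "loss.backward()\n", |
| | + "print(flm_grad.grad.shape)" |
| | + ] |
| | + }, |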
122 | 134 | { |
123 | 135 | "cell_type": "markdown", |
124 | 136 | "metadata": {}, |
125 | 137 | "source": [ |
126 | | - "Finally, lets check the error on the roundtrip is at 64bit machine precision" |
| 138 | + "For the fully precomputed transforms we must also generate the precompute kernels, which we store as torch tensors." |
127 | 139 | ] |
128 | 140 | }, |
129 | 141 | { |
130 | 142 | "cell_type": "code", |
131 | 143 | "execution_count": 7, |
132 | 144 | "metadata": {}, |
| 145 | + "outputs": [], |
| 146 | + "source": [ |
| 147 | + "inverse_kernel = spin_spherical_kernel_torch(L, forward=False) \n", |
| 148 | + "forward_kernel = spin_spherical_kernel_torch(L, forward=True) " |
| 149 | + ] |
| 150 | + }, |
| 151 | + { |
| 152 | + "cell_type": "markdown", |
| 153 | + "metadata": {}, |
| 154 | + "source": [ |
| 155 | + "We then pass the kernels as additional arguments to the precompute transform functions." |
| 156 | + ] |
| 157 | + }, |
| 158 | + { |
| 159 | + "cell_type": "code", |
| 160 | + "execution_count": null, |
| 161 | + "metadata": {}, |
| 162 | + "outputs": [], |
| 175 | + "source": [ |
| 176 | + "precompute_f = precompute_inverse(flm, L, kernel=inverse_kernel, method=\"torch\")\n", |
| 177 | + "precompute_flm_check = precompute_forward(f, L, kernel=forward_kernel, method=\"torch\")" |
| 178 | + ] |
| 179 | + }, |
| 180 | + { |
| 181 | + "cell_type": "markdown", |
| 182 | + "metadata": {}, |
| 183 | + "source": [ |
| 184 | + "Again, we check that the round-trip error is as expected." |
| 185 | + ] |
| 186 | + }, |
| 187 | + { |
| 188 | + "cell_type": "code", |
| 189 | + "execution_count": null, |
| 190 | + "metadata": {}, |
133 | 191 | "outputs": [ |
134 | 192 | { |
135 | 193 | "name": "stdout", |
136 | 194 | "output_type": "stream", |
137 | 195 | "text": [ |
138 | | - "Mean absolute error = 1.1866908936078849e-14\n" |
| 196 | + "Mean absolute error = 2.8472981477378884e-14\n" |
139 | 197 | ] |
140 | 198 | } |
141 | 199 | ], |
142 | 200 | "source": [ |
143 | | - "print(f\"Mean absolute error = {np.nanmean(np.abs(flm_check - flm))}\")" |
| 201 | + "print(f\"Mean absolute error = {np.nanmean(np.abs(precompute_flm_check - flm))}\")" |
144 | 202 | ] |
145 | 203 | } |
146 | 204 | ], |
147 | 205 | "metadata": { |
148 | 206 | "kernelspec": { |
149 | | - "display_name": "Python 3.10.4 ('s2fft')", |
| 207 | + "display_name": "s2fft", |
150 | 208 | "language": "python", |
151 | 209 | "name": "python3" |
152 | 210 | }, |
|
160 | 218 | "name": "python", |
161 | 219 | "nbconvert_exporter": "python", |
162 | 220 | "pygments_lexer": "ipython3", |
163 | | - "version": "3.10.0" |
| 221 | + "version": "3.11.10" |
164 | 222 | }, |
165 | | - "orig_nbformat": 4, |
166 | | - "vscode": { |
167 | | - "interpreter": { |
168 | | - "hash": "3425e24474cbe920550266ea26b478634978cc419579f9dbcf479231067df6a3" |
169 | | - } |
170 | | - } |
| 223 | + "orig_nbformat": 4 |
171 | 224 | }, |
172 | 225 | "nbformat": 4, |
173 | 226 | "nbformat_minor": 2 |
|