Perhaps, I think, we are living in an age in which human hope has run out. Expectations for the future, the prospect of a better life, admiration for science — all things that were taken for granted until quite recently — have all come to nothing in this era.
A sentence that could appear in the English-to-Japanese translation section of a university entrance exam popped into my head, so I rendered it in a Frieren-like tone and could easily hear it in her voice lol
Read
Acta Cryst. A "Optimal estimated standard uncertainties of reflection intensities for kinematical refinement from 3D electron diffraction data"
Very interesting. I like the caution about "observed" reflections.
- Why is GoF smallest with model 2? → reply
- Why do inflated sigmas make "the model less reliable"? Does atomic model refinement use only "observed" reflections?
- This paper is about kinematical refinement, but how are sigmas corrected for dynamical refinement?
- I understand that ESDs in least-squares atomic model refinement are calculated by error propagation. However, it is not clear whether the deviations of observed intensities from the true values are normally distributed; i.e. there might be systematic errors and/or the distribution might be skewed, even when the normal probability plot looks reasonable. Are ESDs meaningful for kinematically refined MicroED structures? → reply
- What about ESDs of traditional X-ray structures? Certainly they are more precise, but again they are merely a lower bound. Coming from macromolecular crystallography, I am curious whether they are really worth mandating in reports. The reporting requirement seems to preclude the use of a maximum-likelihood target that marginalizes over phase errors. The ML target is common in macromolecular crystallography and seems more robust than least squares when initial phases are poor and the data are incomplete or anisotropic.
- Can dials.scale implement this? → reply
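The error-propagation point above can be made concrete with a toy weighted least-squares fit: parameter ESDs come from the diagonal of the covariance matrix (JᵀWJ)⁻¹, so inflating every measurement sigma inflates every ESD proportionally, even when the refined parameters barely move. A minimal sketch, with all numbers invented for illustration:

```python
import numpy as np

# Toy weighted least-squares fit of y = a + b*x, showing how measurement
# sigmas propagate into parameter ESDs via (J^T W J)^-1.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
J = np.column_stack([np.ones_like(x), x])    # design matrix
sigma = np.full_like(x, 0.1)                 # measurement sigmas (made up)
y = 1.0 + 2.0 * x + rng.normal(0.0, sigma)

W = np.diag(1.0 / sigma**2)
cov = np.linalg.inv(J.T @ W @ J)             # parameter covariance matrix
a, b = cov @ J.T @ W @ y                     # refined parameters
esd_a, esd_b = np.sqrt(np.diag(cov))         # ESDs by error propagation

# Doubling every sigma doubles every ESD, with the same least-squares
# solution -- only the reported precision changes.
cov2 = np.linalg.inv(J.T @ np.diag(1.0 / (2 * sigma) ** 2) @ J)
```

This is one way to read "inflated sigmas make the model less reliable": the stated precision degrades even if the coordinates themselves are unchanged.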
The paper applies outlier rejection. Indeed, the final section mentions that "the differences between the models would become smaller" with higher multiplicity. However, considering that the three-parameter error model was superior to the naive population sigma even in a massive-multiplicity SFX dataset (see Aaron's paper), I feel this is worth exploring.
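For reference, the three-parameter error model I have in mind here — my paraphrase of the form used in SFX scaling, with invented parameter values — inflates the naive sigma as σ′ = sdfac·√(σ² + sdb·I + (sdadd·I)²):

```python
import numpy as np

def inflate_sigma(I, sigma, sdfac, sdb, sdadd):
    """Three-parameter error model (my paraphrase):
    sigma' = sdfac * sqrt(sigma^2 + sdb*I + (sdadd*I)^2).
    The parameters are normally refined against the data; the values
    passed below are made up for illustration."""
    return sdfac * np.sqrt(sigma**2 + sdb * I + (sdadd * I) ** 2)

I = np.array([100.0, 1000.0, 10000.0])   # merged intensities (made up)
sigma = np.sqrt(I)                        # naive counting-statistics sigma
corrected = inflate_sigma(I, sigma, sdfac=1.1, sdb=0.2, sdadd=0.03)
```

The sdadd term grows with I, so strong reflections get proportionally larger corrections — which is exactly where naive counting statistics are most over-optimistic.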
Preprint "PhAI: A deep learning approach to solve the crystallographic phase problem"
Impressive, and thank you very much for releasing both the training code and the model weights.
- I wonder why the neural network was implemented in reciprocal space. 3D convolutions seem more "meaningful" (provide a good inductive bias) in real space. One could replace traditional charge flipping, solvent flattening, histogram matching, etc. in dual-space iterative algorithms with a density-modifying neural network. Was such an approach tested (and failed)?
Actually I myself tried this five years ago for proteins and failed. I was working on phase improvement: given a very poor initial phase in P1 (e.g. after molecular replacement from a distant model), can a denoising network improve the phase? Because it was in P1 and the initial phase "fixed" the origin, the origin problem was not relevant here. But it was just a side project; I didn't spend much time on it, and neural networks dealing with 3D volumes were less common at the time. With the latest know-how, real-space approaches might also work (but I abandoned the project years ago). → reply
- Randomly omitting 15% of reflections is different from more realistic cases of missing wedges (e.g. insufficient rotation in MicroED). Did the authors test such situations? Can the network be trained on such cases? → reply
- The "42,000,000 structures" used for training are a very large set. I wonder what fraction of all possible crystal structures below the given unit cell volume the training set represents. Did it contain structures very similar to the test case? E.g. naphthalene is a very common motif. What is the network's performance when trained on a smaller set? I ask because in real space, solution-like features are common regardless of where they appear: we can expect a convolutional filter trained on a 10 Å cube to apply to a larger volume. This is not so obvious in reciprocal space. So I wondered how the required network size and the number of training structures scale as you increase the unit cell volume you can solve. → reply 1, 2
- Finally, it would be very interesting to see if this approach can be extended to non-centrosymmetric space groups, where phases are arbitrary. → reply 1, 2
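On the centrosymmetric point: with an inversion center at the origin, atoms come in ±x pairs, so F(h) = Σⱼ fⱼ exp(2πi h·xⱼ) collapses into a real sum of cosines and the phase can only be 0 or π; non-centrosymmetric space groups lift that restriction. A quick numpy check with unit point scatterers (a toy model, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(1)
xyz = rng.random((5, 3))             # fractional coordinates (made up)
centro = np.vstack([xyz, -xyz])      # add inversion mates: pairs +-x

h = np.array([[1, 0, 0], [2, 1, 3], [0, 1, 1]])  # some Miller indices
phase = 2.0 * np.pi * (h @ centro.T)
F = np.exp(1j * phase).sum(axis=1)   # unit scattering factors

# The exp(+i...) and exp(-i...) terms cancel pairwise in the imaginary
# part, so every F(h) is real and its phase is restricted to 0 or pi.
print(np.abs(F.imag))
```

In non-centrosymmetric space groups F(h) is genuinely complex, so the network would have to predict a continuous phase per reflection rather than a binary sign, which seems like a qualitatively harder output space.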
Science "Entanglement with tweezed molecules"
Hmm, I feel like I get it and I don't... Unlike photons, which are generated in an entangled state from the start, how does the mechanism work for bringing particles into an entangled state after the fact?
Anime
Watched "Undead Unluck" up to episode 10.
The first few episodes of this show were fun, but it has felt mediocre since they joined the round table. The zombie arc was long and heavy. I think arcs like that should come after several episodes with villains who are dealt with in a single episode, once viewers know, and have come to love, the characters and the world.
Watched episode 14 of "Frieren: Beyond Journey's End".
I like the head-patting scene, but I don't like how Fern keeps developing an ever more difficult personality. I understand that the theme of this show is Frieren coming to understand human emotions, troublesome sides included, but I hope she doesn't lose her transcendent perspective.