Project and how it's going:
- Let's talk about rocket launches and NET dates! Real programs rarely go on schedule.
- Some issues people are having: large programs are complicated. CASTER is no exception!
- There are problems CASTER solves (like zooming and moving).
- Layers are about 7.5 pixels thick.

Grading:
- Have you all worked hard on this?

Since you're working on research with me:
- Mitochondria tracing
- New image volumes
- Getting an R15 grant and such
- Getting this into a tool that somebody could actually use
- My new project: RTX bioinformatics

Neural Networks on a GPU:
- Let's review neural networks first.
- Reminder: synapses and input weighting (see the worked equations after this outline).
- Grid? Not in real life!
- Backpropagation and training: "deep" learning.

Now that we know:
- Nodes: how many?
- Nodes require large numbers of floating point calculations.
- Each layer can be run with all nodes in parallel (see the CUDA sketch below).
- A tensor core can do 64 FMA operations per clock cycle.
- 640 tensor cores on a V100, fewer on RTX (288 on a 2070); see the throughput arithmetic below.
- Who is nVidia's competitor? Vega can perform 64-bit floating point at half its 32-bit speed.
- What does a V100 cost? What would it cost to beat one with Intel processors?
  https://software.intel.com/en-us/articles/intel-cpu-outperforms-nvidia-gpu-on-resnet-50-deep-learning-inference
  nVidia response: https://blogs.nvidia.com/blog/2019/05/21/intel-inference-nvidia-gpus/
- Ok, trying something different with the GPU this time! Can we make some vesicles disappear?
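As a quick refresher for the "synapses and input weighting" and "backpropagation" bullets above, here is the standard form of a single node and the gradient-descent weight update used in training. The symbols (w, x, b, sigma, eta, E) are generic textbook notation, not names taken from the course materials.

```latex
% One node: weighted sum of its synaptic inputs, passed through an activation
a = \sigma\!\left(\sum_{i} w_i x_i + b\right)

% Backpropagation / training: nudge each weight down the gradient of the error E
w_i \leftarrow w_i - \eta \, \frac{\partial E}{\partial w_i}
```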
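A minimal sketch of the "all nodes in parallel" point: one CUDA thread computes one output node of a fully connected layer, so the whole layer runs concurrently. The kernel name, layout, and launch configuration here are my assumptions for illustration, not code from the class project.

```cuda
#include <cuda_runtime.h>

// Hypothetical sketch: each thread computes one output node of a dense layer.
// in:  n_in inputs;  W: n_out x n_in weights (row-major);  b: n_out biases
__global__ void dense_forward(const float* in, const float* W, const float* b,
                              float* out, int n_in, int n_out)
{
    int node = blockIdx.x * blockDim.x + threadIdx.x;
    if (node >= n_out) return;

    // Weighted sum of the inputs feeding this node (the "synapse" weights)
    float sum = b[node];
    for (int i = 0; i < n_in; ++i)
        sum += W[node * n_in + i] * in[i];

    // ReLU activation; every node in the layer does this independently
    out[node] = sum > 0.0f ? sum : 0.0f;
}

// Launch with enough 256-thread blocks to cover every node in the layer:
// dense_forward<<<(n_out + 255) / 256, 256>>>(d_in, d_W, d_b, d_out, n_in, n_out);
```

Each node's weighted sum is just a long run of multiply-adds, which is exactly the FMA work that tensor cores batch up.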
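Rough back-of-the-envelope for the tensor-core bullets above: the 640-core and 64-FMA-per-clock figures are from the outline, and I'm assuming a V100 boost clock of roughly 1.53 GHz; counting each FMA as two floating point operations gives the headline mixed-precision number.

```latex
640~\text{cores} \times 64~\tfrac{\text{FMA}}{\text{clock}}
  \times 2~\tfrac{\text{FLOP}}{\text{FMA}}
  \times 1.53~\text{GHz} \approx 125~\text{TFLOPS (mixed precision)}
```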