EDIT: This answer is specifically from the perspective of very computationally oriented fields like theoretical plasma physics.
Most physicists can program, and in fact many are rather good programmers; it would be difficult to work in modern physics without being able to program. Unfortunately, many others are not terribly good programmers (I've read many a Fortran code where `goto` was the primary method of flow control).
Having faster algorithms is always desirable, and hence algorithm analysis is useful. However, in many cases the algorithm's speed is not the limiting factor, so such analysis isn't as useful as one might hope. More on that later.
One thing I did as a high-school student working in a lab was essentially develop GUIs for existing programs. In theoretical plasma physics, there are a large number of codes one runs to get some idea of what is going on in the reactor. Developing a GUI for this isn't as trivial as you might think: integrating parameter input, data visualisation, and connecting the codes in a nice way actually requires some knowledge of what's going on physically. This is more directed toward programmers than computer scientists, but it should still be useful.
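To make the pattern concrete, here is a minimal sketch in the spirit of what I mean, assuming a hypothetical command-line solver `transport_code` that reads a parameter file and writes one number per line to `out.txt`; all names and file formats here are invented for illustration.

```python
# Minimal sketch: a GUI front end for an existing batch code.
# "transport_code" is a hypothetical solver that reads params.txt
# and writes one number per line to out.txt; substitute your own.
import subprocess
import tkinter as tk

from matplotlib.figure import Figure
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg

root = tk.Tk()
root.title("Front end for transport_code (hypothetical)")

tk.Label(root, text="Heating power [MW]:").pack()
power = tk.Entry(root)
power.insert(0, "10.0")
power.pack()

fig = Figure(figsize=(5, 3))
ax = fig.add_subplot()
canvas = FigureCanvasTkAgg(fig, master=root)
canvas.get_tk_widget().pack()

def run_code():
    # 1. Translate the GUI input into the input file the legacy code expects.
    with open("params.txt", "w") as f:
        f.write(f"power = {power.get()}\n")
    # 2. Run the existing code unchanged (hypothetical executable).
    subprocess.run(["./transport_code", "params.txt"], check=True)
    # 3. Read and visualise its output.
    with open("out.txt") as f:
        values = [float(line) for line in f]
    ax.clear()
    ax.plot(values)
    ax.set_xlabel("radial grid point")
    ax.set_ylabel("flux (arb. units)")
    canvas.draw()

tk.Button(root, text="Run", command=run_code).pack()
root.mainloop()
```

Even this toy version forces you to know which parameters matter and what a sensible plot of the output looks like, which is exactly where the physics knowledge comes in.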
Another direction in which computational physics will need to go is data-driven theory; computer scientists know this better as machine learning. I'll just give you an example of a project I did, again in plasma physics. When calculating the turbulent transport for stellarators, the gold standard is the class of so-called gyrokinetic simulations. These can run for 100 million CPU hours or more and generate huge amounts of data. My advisor (I was an intern at the time) suggested we explore the simulation output with neural networks. The idea was to train a neural network on as many gyrokinetic simulations as possible and then see what it could do. We expected it probably wouldn't be able to do much of anything.
None of the existing neural network packages, commercial or free, was sufficient for what we needed. The system has built-in exact and approximate symmetries, which are often non-obvious, and translating these into a form a neural network could exploit wasn't easy. I ended up writing the code entirely myself, just slapping in as much of the physics as I could. It worked surprisingly well, and both my advisor and I thought this would be a very interesting direction in the future. Unfortunately, going beyond that was beyond my programming ability, and would probably require an expert in neural networks who also knew a lot of plasma physics.
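I no longer have that code, but the flavour was roughly the following: train a regression network on (simulation inputs → transport output) pairs, and put an assumed symmetry in by augmenting the training set with symmetry-transformed copies. Everything below, including the toy reflection symmetry and the fake data, is a stand-in for illustration, not the actual setup.

```python
# Sketch of a surrogate model trained on simulation data, with an
# assumed symmetry baked in via data augmentation. The "reflection"
# symmetry here is a stand-in; the real symmetries were messier.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Pretend these are (geometry parameters -> turbulent flux) pairs
# extracted from expensive gyrokinetic runs.
X = rng.uniform(-1.0, 1.0, size=(200, 4))
y = np.sum(X**2, axis=1) + 0.01 * rng.normal(size=200)

# Assumed physics: the flux is invariant under x -> -x.
# Teach the network that by adding the mirrored samples.
X_aug = np.vstack([X, -X])
y_aug = np.concatenate([y, y])

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                     random_state=0)
model.fit(X_aug, y_aug)

x_test = rng.uniform(-1.0, 1.0, size=(1, 4))
print(model.predict(x_test), model.predict(-x_test))  # nearly equal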
I don't expect one could manufacture a single neural network code that would be useful across a broad range of disciplines. If there were a way to build symmetries into the code that the network would have to follow, that would be extremely useful for data-driven theory, but I'd guess each such code would have to be built individually. This is an area in which theoretical (computational) physicists and computer scientists can and probably should collaborate more. Neural networks obviously aren't the only tool, either; I'd imagine that in fields like computational plasma physics, data-driven theory would see a huge boom if we could use machine learning with some of the physics built in.
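For what it's worth, one generic way to make a model follow a finite symmetry exactly, rather than just encouraging it with augmented data, is to average its prediction over the group. This is a standard trick sketched under toy assumptions, not anything specific to a plasma code:

```python
# Sketch: enforce invariance under a finite symmetry group exactly by
# averaging an arbitrary model's prediction over the group elements.
import numpy as np

def symmetrize(predict, group):
    """Wrap `predict` so the result is exactly group-invariant.

    predict: array (n, d) -> array (n,)
    group:   list of functions mapping inputs to transformed inputs
    """
    def invariant_predict(X):
        return np.mean([predict(g(X)) for g in group], axis=0)
    return invariant_predict

# Toy group: identity and reflection x -> -x.
group = [lambda X: X, lambda X: -X]

def raw_model(X):
    # A deliberately asymmetric "trained model" standing in for a network.
    return X[:, 0] + X[:, 0] ** 2

inv = symmetrize(raw_model, group)

X = np.array([[0.3, 0.7], [-0.3, -0.7]])
print(inv(X))  # identical values for x and -x, by construction
```

For continuous or approximate symmetries this simple averaging no longer works directly, which is exactly where I'd expect the physicist/computer-scientist collaboration to be needed.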
I should probably add that what I was attempting to do was not, strictly speaking, data-driven theory, but rather simulation-driven theory. True data-driven theory would use experimental data, but this is by far the costlier option (given that each configuration corresponds to building a 1 billion USD+ stellarator). It was essentially a proof-of-concept project.
As for algorithm speed, in the case of plasma physics the limiting factor isn't necessarily being able to do the simulations. Even the most costly simulations we're interested in can be done reasonably on today's supercomputers. So-called "full" simulations would require some $10^{30}$ times more computation, which is unlikely to ever be feasible. The region in between has so far proven rather chaotic, and the answers don't seem to improve a huge amount by just throwing more grid points at the problem. We need to first understand what is happening on the small scales, and then we can apply that understanding. There are a number of techniques for such computations, such as the aforementioned gyrokinetic simulations, but these are essentially just our best guess and only approximately match experiment.
In a stellarator, the turbulent transport depends critically on the reactor geometry, and as such there is essentially an infinite-dimensional parameter space to explore. At least for studying this parameter space perturbatively, the most promising direction seems to be hybridized data/simulation-driven theory development using machine learning. Having faster codes would help, but it isn't clear they'd get us fundamentally closer to the right physics; rather, the problem seems to be that we're not sure how to design such algorithms to get what we want from them. Admittedly, this was several years ago, I've not kept up with the literature, and there were only a few people pursuing this direction, so I don't know whether it's still open.