  What do theoretical physicists need from computer scientists?

+ 13 like - 0 dislike
3643 views

I recently co-authored a paper (not online yet unfortunately) with some chemists that essentially provided answers to the question, "What do chemists need from computer scientists?" This included the solution of theoretical problems, like combinatorial enumeration and the sampling of certain classes of graphs; and practical programming problems, like open-source implementations of algorithms that are currently only implemented in expensive software packages.

This motivates me to ask: what about theoretical physics? Are there theoretical issues in combinatorics or algorithm analysis that physics needs a theoretical computer scientist to solve? Or how about the creation of practical tools that would allow a theoretical physicist to do a better job: "If only I had a program that would solve this type of problem for me!"

Intended as community wiki.

asked Sep 20, 2011 in Theoretical Physics by Aaron Sterling (130 points) [ no revision ]
Soft questions are not the best ones for the closed beta phase - see e.g. http://theoreticalphysics.stackexchange.com/questions/how-to-ask-beta. I think it will go much better (both for your question and for the TP.SE community) if you delay it a bit.

@PiotrMigdal: I respectfully disagree. The type of question to avoid is one that encourages soft *answers*. I am requesting concrete research-level answers. That said, I am happy to stop commenting, etc., on this question until public beta. I don't wish to cause a problem, that is for sure.

Don't get me wrong. Soft does not mean easy - it is just a question for which there is no single objective solution. And, before the TP.SE community is fully built, it may tempt non-researchers to answer. (I guess you will agree with me that your question is one of those that are easy to answer in _any_ way, but difficult to answer in a _good_ way.)

I think the question is still too soft in the sense that it asks about very different topics. Answers could be about office solutions like small-scale document management systems (like the one behind the arXiv), computational physics and numerical mathematics (e.g. open-source libraries for fast sparse linear algebra not in FORTRAN), visualization, GUI design, web page design, algorithm development and analysis (which can be and is done by people who have never written a program that compiled in their entire career), high-performance computing and data analysis (like for the LHC), etc.

@TimvanBeek: I am not sure what your objection is. Everything you mention is something an expert might encounter when trying to perform a research-level task. That is well within the criteria provided in the link Piotr Migdal posted. In any event, I can't delete the question without also deleting the answers, so I'm not sure what I can do, except not ask another of this type.

@Aaron Sterling: My suggestion is to have a stronger focus even in soft questions, in order to address one area of expertise, not many. Office solutions (word processing etc.) are a whole different topic from numerical algorithms, with people in one area rarely ever talking to people in the other. (I certainly don't suggest deleting the question, or any answers; I think it is a very good example of a soft question for this private beta phase.)


5 Answers

+ 15 like - 0 dislike

I think perhaps some of the other answers are taking computer science to be synonymous with computation. I guess that this is not quite what you mean; rather, you mean theoretical computer science. There is obviously a huge overlap with quantum information processing, of which I think you are already well aware, so I will ignore that.

Much of physics (including quantum physics) is continuous, so the type of mathematics used tends more towards continuous maths (solving PDEs, finding geodesics, etc.) as compared to the discrete structures studied in theoretical computer science. As such, there isn't so much of an overlap. Statistical mechanics tends to be more concerned with discrete structures, so there is more of an overlap there.

One huge area of overlap is actually in terms of computational physics, where people are concerned with computing certain properties of physical systems. In particular, simulating physical systems is a huge area of research, and there is a lot of focus on finding efficient algorithms for such simulations; finding quantum ground states and simulating quantum dynamics are the ones I have most direct experience of. There has been quite a lot of progress both in terms of proving hardness results (for example Scott Aaronson's recent paper on the hardness of simulating linear optics, the QMA-completeness of finding quantum ground states even of quite restricted systems, simulating commuting operators, etc.) as well as efficient algorithms (for example matchgates, or the Gottesman-Knill theorem).
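As a concrete illustration of the "finding quantum ground states" problem (an illustrative sketch, not part of the original answer): the brute-force approach is exact diagonalization, which scales exponentially in the number of spins, which is exactly why both hardness results and efficient special-case algorithms matter. The model (a small transverse-field Ising chain) and all parameter values below are assumptions chosen purely for the example.

```python
# Minimal sketch (illustrative only): ground state of a small transverse-field
# Ising chain by exact diagonalization. Brute force scales exponentially in N.
import numpy as np
from scipy.sparse import identity, kron, csr_matrix
from scipy.sparse.linalg import eigsh

N, J, h = 10, 1.0, 0.5                      # chain length and couplings (assumed values)
sx = csr_matrix(np.array([[0., 1.], [1., 0.]]))
sz = csr_matrix(np.array([[1., 0.], [0., -1.]]))

def op_at(op, site):
    """Embed a single-site operator at `site` in the N-spin Hilbert space."""
    mats = [identity(2, format="csr")] * N
    mats[site] = op
    out = mats[0]
    for m in mats[1:]:
        out = kron(out, m, format="csr")
    return out

H = csr_matrix((2**N, 2**N))
for i in range(N - 1):
    H = H - J * (op_at(sz, i) @ op_at(sz, i + 1))   # nearest-neighbour ZZ coupling
for i in range(N):
    H = H - h * op_at(sx, i)                        # transverse field

E0 = eigsh(H, k=1, which="SA", return_eigenvectors=False)[0]
print("ground-state energy per spin:", E0 / N)
```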

answered Sep 20, 2011 by Joe Fitzsimons (3,575 points) [ no revision ]
The mainstream models are discrete, but there are other models that deal with the continuous. A good old example is [here](http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.pl/1235422916). There is also an interesting interaction on questions related to random k-SAT.

@Kaveh: Yes, I know. I simply meant at a high level the mathematics tends to be different.

While I certainly like this answer, I don't find any indication in the question that Aaron is intending for it to be mostly directed towards the area of theoretical computer science; he even asks about "creation of practical tools". Of course the questioner is a theoretical computer scientist, but from what I can tell the question is equally about all branches of computer science.

@Logan: He specifically says "Are there theoretical issues of combinatorics, algorithm analysis, that physics needs a theoretical computer scientist to solve?"

@Joe: Yes, but in the very next sentence he asks about practical problems, which was to me an indication that he was looking for answers from both the theoretical and applied points of view. Also, to clarify, I was not taking computation and computer science as synonymous, but I do have to admit I don't know much about what theoretical computer science consists of beyond, say, Hopcroft & Ullman. The theoretical computer scientists here do research in things like biological computing, which is, as far as I can tell, not what most people mean when they talk about theoretical computer science.

@LoganMaingi: Perhaps that quote coupled with my previous interactions with Aaron have coloured my interpretation of what he was asking.

+ 9 like - 0 dislike

EDIT: This answer is specifically from the perspective of very computationally oriented fields like theoretical plasma physics.

Most physicists can program, and in fact many are rather good programmers. It would be difficult to work in modern physics without being able to program. Unfortunately, many are also not terribly good programmers (I've read many a Fortran code where GOTO was the primary method of flow control).

Having faster algorithms is always desirable, and hence algorithm analysis is useful. However, in many cases, the algorithm speed is not the limiting factor, so it isn't as useful as one might hope. More on that later.

One thing that I did in high school at a lab was essentially develop GUIs for existing programs. In theoretical plasma physics, there are a large number of codes that one runs to get some idea of what is going on in the reactor. Developing a GUI for this isn't as trivial as you might think; integrating parameter input, data visualisation, and connecting the codes in a nice way actually requires some knowledge of what's going on physically. This is more directed toward programmers than computer scientists, but it should still be useful.

Another direction in which computational physics will need to go is data-driven theories. Computer scientists know this better as machine learning. I'll just give you an example of a project I did, again in plasma physics. When calculating the turbulent transport for stellarators, the gold standard is so-called gyrokinetic simulations. These can run for 100 million CPU hours or more and generate huge amounts of data. My advisor (I was an intern at the time) suggested we explore the output files with neural networks. The idea was to train a neural network using as many gyrokinetic simulations as possible, and then see what it could do. We expected it probably wouldn't be able to do much of anything.

None of the existing neural network packages, commercial or free, were sufficient for what we needed. The system has built-in symmetries and approximate symmetries, which are often non-obvious. Translating these into a form a neural network could work with wasn't easy. I ended up writing the code entirely myself, just slapping in as much of the physics as I could. It worked surprisingly well, and both my advisor and I thought this would be a very interesting direction for the future. Unfortunately, going further was beyond my programming ability, and would probably require an expert in neural networks who knew a lot of plasma physics.

I don't expect one could manufacture a neural network code that would be useful to a broad range of disciplines. If there were a way to build symmetries into the code that the network would have to follow, that would be extremely useful for data-driven theory; I'd guess, though, that each one would probably have to be built individually. This is an area in which theoretical (computational) physicists and computer scientists can, and probably should, collaborate more. Neural networks obviously aren't the only option either; I'd imagine that in fields like computational plasma physics, data-driven theory would see a huge boom if we could use machine learning with some of the physics built in.
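To make the idea of "building a symmetry into the network" concrete, here is a toy sketch (not from the original answer): one crude way to enforce a known symmetry is to average the surrogate's prediction over the symmetry group, which makes the output exactly invariant whatever the raw network learned. The fake target, the reflection symmetry, and the use of scikit-learn's MLPRegressor are all assumptions made purely for the example; a real gyrokinetic surrogate would be far more involved.

```python
# Toy sketch: make a neural-network surrogate exactly invariant under a known
# symmetry by averaging its predictions over the group orbit of the input.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(2000, 3))              # stand-in for simulation inputs
y = np.cos(X[:, 0]) + X[:, 1]**2 * X[:, 2]**2       # fake target, invariant under X -> -X

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(X, y)

def symmetrized(model, X, group=(+1.0, -1.0)):
    """Average the model over the reflection group {x, -x}: exactly invariant."""
    return np.mean([model.predict(g * X) for g in group], axis=0)

x_test = rng.uniform(-1, 1, size=(5, 3))
print(np.max(np.abs(symmetrized(net, x_test) - symmetrized(net, -x_test))))  # 0 by construction
```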

I should probably add that what I was attempting to do was not, strictly speaking, data-driven theory, but rather simulation-driven theory. True data-driven theory would use experimental data, but this is by far the costlier option (given that each configuration corresponds to building a 1 billion USD+ stellarator). It was essentially a proof-of-concept project.

As for algorithm speed, in the case of plasma physics the limiting factor isn't necessarily being able to do the simulations. Even the most costly simulations we're interested in can be done reasonably on supercomputers today. So-called "full" simulations would require some $10^{30}$ times more computation, which is unlikely to ever be feasible. The regime in between has so far proven to be rather chaotic, and it doesn't seem that the answers improve a huge amount by just randomly throwing more grid points at the problem. We need to first understand what is happening on the small scales, and then we can apply this. There are a number of techniques for such computations, such as the aforementioned gyrokinetic simulations, but these are essentially just our best guess and only approximately match experiment.

In a stellarator, the turbulent transport depends critically on the geometry of the reactor, and as such there is essentially an infinite-dimensional parameter space to explore. At least for studying this parameter space perturbatively, the best direction seems to be hybridized data/simulation-driven theory development using machine learning. Having faster codes would help, but it isn't clear that they'd get us fundamentally to the right physics; rather, the problem seems to be that we're not sure how to develop such algorithms to get what we want from them. Admittedly, this was several years ago, I've not kept up with the literature, and there were only a few people pursuing this direction, so I don't know whether the problem is still open.

answered Sep 20, 2011 by Logan Maingi (210 points) [ no revision ]
Alright, I must admit that this doesn't exactly fit your comment on Michael's post, but I think it's a start.

I think you can generalize your answer to machine learning ideas and not just neural networks.

Since I've retracted my post, just to clarify what Logan Maingi meant by "comment on Michael's post": I posted an example with a very general need for computer science, but the question asks for an explicit problem/example in TP that requires CS.

I mentioned machine learning in a broader context, see for instance the last sentence. I can't claim to be knowledgeable of anything in the field except a little bit on neural networks, which is why the answer almost exclusively discusses them. I would agree that there is much beyond neural networks to be gained, but I don't know quite what it is.

I have to disagree. It is exactly the algorithms that run on supercomputers that you need to speed up. (I'm not saying that theoretical computer scientists can help, but saying "we can always just run things on faster computers" is definitely the wrong attitude.) At some point, I saw an analysis of the speed-up in solving linear programs over a period of some decades. There was a factor of several million that came from faster machines. There was a significantly larger factor that came from improvements to algorithms. And people would *still* like to solve linear programs faster.

While I can't disagree that it would be nice to speed these up, considering the amount of work that was put into optimizing every aspect of the particular codes I worked with (by professional programmers as well as physicists), I'd be surprised if they could be sped up more than linearly. While a speed up of a million times would certainly increase the quality of the simulation, in this case the system is intrinsically chaotic and even a million times more computation wouldn't get you that far. In such complicated systems, data-driven theory seems to be the best way to approach it.

Also, let me add that it's great to see such a notable researcher on this site. If you could write up your own experiences in greater detail as an answer, I'm sure it would be highly appreciated by everyone here.

@Peter After reading my writing again, I can see what you disagreed with. I removed the second paragraph, which was saying that algorithm analysis is not useful. I meant this only in the very narrow context I was dealing with, and even then it wasn't well explained. I still think that faster algorithms wouldn't solve the problems posed here, but of course they would help.

@LoganMaingi: Finding a better algorithm for a problem goes way beyond optimizing code for a specific algorithm. An example of this is matrix multiplication: no matter how well you implement the multiplication you learned in school or college, it takes $O(n^3)$. However, there are better algorithms which give a scaling of $O(n^k)$ for $k<3$, which you would never get by trying to optimize the implementation of naive multiplication. See http://en.wikipedia.org/wiki/Matrix_multiplication#Algorithms_for_efficient_matrix_multiplication
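To make the algorithm-versus-implementation distinction concrete (an illustrative sketch, not part of the original comment): Strassen's algorithm replaces the eight recursive block products of the schoolbook method with seven, giving roughly $O(n^{2.81})$ instead of $O(n^3)$; no amount of tuning the naive triple loop achieves that.

```python
# Sketch of Strassen's algorithm for n x n matrices with n a power of two.
# Seven recursive block products instead of eight gives O(n^log2(7)) ~ O(n^2.81).
import numpy as np

def strassen(A, B, cutoff=64):
    n = A.shape[0]
    if n <= cutoff:                      # fall back to an ordinary multiply on small blocks
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty((n, n))
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

A, B = np.random.rand(256, 256), np.random.rand(256, 256)
print(np.allclose(strassen(A, B), A @ B))    # True: same product, different algorithm
```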

I'm well aware of the problem of optimizing matrix multiplication, but I'm not convinced this is something that is specifically or especially wanted by physicists. Virtually everyone in the world who does any computation wants to multiply matrices faster. Of course, if someone can find a faster algorithm for matrix multiplication, singular value decomposition, the Fourier transform, etc., then it would greatly benefit physicists, but that's fairly obvious. Also, at the time I posted this answer, Aaron was asking for specific examples; hence, I tried to stay fairly concrete with the problem.

@LoganMaingi: I wasn't talking about matrix multiplication. It was simply an example of the difference between a better algorithm and optimized code.

I'm afraid I don't understand what point you're trying to make, then. I agree that algorithms can be improved, but barring a rather drastic and unlikely advance it wouldn't seem that programmatic computation alone will solve the issues posed. Are you disagreeing with me here? Or are you disagreeing when I say that using machine learning techniques may help alleviate the issue? I'm finding myself agreeing with what you say, so I'm not sure what I said that you're disagreeing with. I suspect it may be a misunderstanding. Could you give a quote to which statement I made that you don't agree with?

I had in mind your comment to Peter where you said "While I can't disagree that it would be nice to speed these up, considering the amount of work that was put into optimizing every aspect of the particular codes I worked with (by professional programmers as well as physicists), I'd be surprised if they could be sped up more than linearly." It seemed to me that you were focused on optimizing your programme, while he was talking about better algorithms.

Alright, I now understand what you're talking about, and I can see that you may be right. However, if that's what he meant with that comment, I think it's off topic on this site and this question. The question is about what theoretical physicists want particularly, not what everyone who does computation wants in general. I'd be ecstatic for $O(n^2)$ matrix multiplication, but I don't think that's a reasonable thing to ask. It'd be like asking mathematicians to answer the Hodge conjecture. It's already well-publicized and people are working on it, but it's not an easy problem.

What I'd really like to see is if someone could come up with a list of commonly used algorithms in physics that aren't well known outside physics. That would be a post in the algorithm analysis area that I'd support. But I think such general things as "Improve your algorithms" or "Give us $O(n^2)$ matrix multiplication" aren't terribly good examples for this question. The former is somewhat irresponsible without having read the algorithm itself. How do you know that improving the algorithm will solve the problem better than machine learning? The latter is discussed in the previous comment.

I should add a caveat that I agree with Peter's original comment entirely, as I read it. I thought he was pointing out that the idea that "more powerful computers will always be available" is irresponsible. If I seem to support such a position, in fact I do not. That some of my colleagues thought this way is part of why I moved out of computation into more mathematical areas. It makes algorithm analysis seem pointless, which it isn't. However, I also think that procedure- and algorithm-based programming has its limits. Machine learning and data-driven theory are the next step when stuff hits a wall.

@LoganMaingi: I think you still miss my point. Saying "we want better simulation algorithms" is an entirely reasonable thing to ask for. As I said, I only used matrix multiplication as an example to illustrate the difference between better algorithms and faster code.

"What I'd really like to see is if someone could come up with a list of commonly used algorithms in physics that aren't well known outside physics." -- So would I! ... but, I'll wait until the public beta to ask that question. :-)

@Aaron: You know physicists have been using randomized algorithms since the 40s?

@Joe: I would say my "knowledge" of algorithms in physics is at the level where it is more likely to misguide than inform me.

+ 5 like - 0 dislike

Disclaimer: I am not very familiar with the numerical analysis literature, so it is quite possible that the problems I will mention are already solved.

Perhaps this is a more banal suggestion, but I think that better software for doing numerical differential (and in some cases, algebraic) geometry would be amazing. This probably seems 'obvious', and to many people this might already be a solved CS problem, but I'd rather illustrate the issue with an example.

I once had to consider some 10-dimensional metric $ds_{10} = ds_{S^5} + ds_{M}$ and solve a simplified eigenvalue problem: what is the lowest eigenvalue of the Laplace-Beltrami operator associated to $ds_{10}$? The physical relevance of the problem turned out to be that we could use a dipole-like approximation that depends only on the lowest non-zero eigenvalue. After spending a few weeks trying to find this eigenvalue analytically, I decided to try to do it numerically. My advisor had convinced me that the numerical tools to do this would be out there and that it should be a straightforward problem. Unfortunately, three months later, I realized that there were few open-source/commercial programs that could do this task and, moreover, few algorithms developed for approximating slightly higher-dimensional (non-linear) PDEs. I do believe that the numerical methods exist (I've spoken to a few numerical analysts at Courant who said they believed this problem was solved, but stopped short of providing me with references to algorithms).
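For illustration (a sketch added here, not the original computation): the same kind of eigenvalue problem in a toy low-dimensional setting, the lowest non-zero eigenvalue of the Laplace-Beltrami operator on a flat 2-torus, solved with a periodic finite-difference discretization and SciPy's sparse eigensolver. The geometry and grid resolution are assumptions made for the example, and the known analytic answer serves as a sanity check; this brute-force discretization is also exactly what stops scaling to ten dimensions.

```python
# Toy sketch: lowest non-zero eigenvalue of the Laplace-Beltrami operator on a
# flat 2-torus [0, 2*pi)^2, via a periodic finite-difference Laplacian.
# Analytic answer: 1 (eigenfunctions sin x, cos x, sin y, cos y).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n = 100                                  # grid points per dimension (assumed resolution)
h = 2 * np.pi / n

# 1D periodic second-derivative matrix
D2 = sp.diags([np.ones(n - 1), -2.0 * np.ones(n), np.ones(n - 1)], [-1, 0, 1], format="lil")
D2[0, n - 1] = 1.0                       # periodic wrap-around
D2[n - 1, 0] = 1.0
D2 = sp.csr_matrix(D2) / h**2

I = sp.identity(n, format="csr")
L = sp.kron(D2, I) + sp.kron(I, D2)      # 2D Laplacian on the torus

# -L is positive semi-definite; its lowest eigenvalues are 0, 1, 1, 1, 1, ...
vals = eigsh(-L, k=5, sigma=-0.5, which="LM", return_eigenvectors=False)
print(sorted(vals))                      # ~[0, 1, 1, 1, 1] up to discretization error
```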

I think that while theoretical physicists are oftentimes trained to solve things only analytically, it would be nice to gain some intuition from numerical solutions. Moreover, there are a lot of algorithms from mathematicians like Robert Ghrist for computing important invariants like Morse homology, and I think it would be great if we could find the right balance between computer science and physics in order to implement such techniques. I have some background in CS and parallel programming, and I still feel that the two sides (CS, theoretical physics) are so disconnected that it is hard for such solutions to be implemented.

answered Oct 13, 2011 by Tarun Chitra (170 points) [ no revision ]
Thanks very much. This is exactly the kind of thing I was looking for.

+ 4 like - 0 dislike

My old advisor, a physicist working in CS, wrote a paper showing that spacetimes are categorically equivalent to interval domains. His name is Panangaden. Domains are useful in CS as a semantics for programming languages. In causal approaches to quantum gravity (which I take as a general place to find unique forms of novel, relevant physics), time looks much more like the logical clocks we see in the analysis of distributed and concurrent computer systems. Some aspects of CS are useful for the newest work being done in physics, like categorical quantum mechanics.
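For readers unfamiliar with the term, the "logical clocks" mentioned here are Lamport clocks from distributed computing, which order events by causality rather than by a global time. A minimal sketch (added for illustration, not part of the original answer):

```python
# Minimal Lamport logical clocks: each process keeps a counter, increments it on
# local events, and on receiving a message sets it to max(own, sender's) + 1.
# The resulting order respects causality with no global notion of time.
class Process:
    def __init__(self, name):
        self.name = name
        self.clock = 0

    def local_event(self, label):
        self.clock += 1
        print(f"{self.name}: {label} at logical time {self.clock}")

    def send(self, label):
        self.clock += 1
        print(f"{self.name}: send {label} at logical time {self.clock}")
        return self.clock                 # the timestamp travels with the message

    def receive(self, label, timestamp):
        self.clock = max(self.clock, timestamp) + 1
        print(f"{self.name}: recv {label} at logical time {self.clock}")

p, q = Process("P"), Process("Q")
p.local_event("a")        # P: 1
t = p.send("m")           # P: 2
q.local_event("b")        # Q: 1
q.receive("m", t)         # Q: max(1, 2) + 1 = 3, causally after the send
```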

answered Oct 9, 2011 by ben1 (40 points) [ no revision ]
Prakash is something of a special case, as he has a significant background in physics.

+ 0 like - 4 dislike

"Mathematics is to physics what masturbation is to sex" - Richard Feynman. This means physics is about real-world interactions rather than purely abstract concepts. The same can be said of science versus engineering: science develops abstract tools that engineers use to represent real-world computation models.

Computer science is not a science but an engineering discipline. It's not even about computers; it's about modelling computations. The computer is the medium binding thoughts to physics in the form of programs.

The most obvious thing a physicist would gain from computer science is knowledge of algorithms and data structures. Algorithms are general descriptions of how to carry out a specific task. A given task can be accomplished using many different algorithms; it's all a matter of solving the task in the most concise, simple, and efficient way. Knowledge of algorithms helps select the right algorithm for the right data structure. Data structures are ways of storing data. Examples include lists, arrays, numbers, strings, vectors, matrices, even functions (if the language allows functions to be passed around as data). Category theory ties together math, physics, data types, and more; I suggest physicists learn Haskell to make the link themselves.
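As a tiny illustration of the kind of algorithm/data-structure awareness meant here (a sketch added for the example, not from the original answer): the same membership query is linear-time on a list but effectively constant-time on a hash set.

```python
# Same task, two data structures: membership testing scans every element of a
# list (O(n)) but is a near-constant-time hash lookup in a set.
import timeit

data_list = list(range(1_000_000))
data_set = set(data_list)
probe = 999_999

print(timeit.timeit(lambda: probe in data_list, number=100))  # linear scan: slow
print(timeit.timeit(lambda: probe in data_set, number=100))   # hash lookup: fast
```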

In sum, the skill computer science has to offer is being better at solving real-world problems. Programming will let any scientist solve problems at the speed of an electron while making it easier to review thoughts and models.

Programming is hands-on so what are you waiting for? Get your fingers running!

answered Jan 20, 2012 by Samuel Duclos (-40 points) [ no revision ]
I've downvoted this because I consider it unhelpful, and not a real answer.

