Talking mostly about giant planets here, one reason they are so difficult to detect directly is the matter of contrast. The host star (parasitic connotations!) is expected to be roughly 10^9 times brighter than the planet. That's (because this kind of explicating is easier and more fun than contributing new trickier information) a difference of one thousand million, or (in now-standard US usage) a whole billion. This is in visible light. In the infrared (well, wavelengths from 20-100 micrometres) the difference is ~10^4, or about ten thousand times. This is because main sequence stars tend to have their peak luminosity in the visible part of the electromagnetic spectrum and decline fairly rapidly in the infrared, whereas giant planets are expected to be brightest there (they radiate in the infrared from gravitational contraction, at least until they are done contracting [Jupiter still does]).
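Since I like arithmetic more than trusting quoted numbers, here is a quick back-of-the-envelope check in Python. This is my own sketch, not anything from the paper: I am assuming a 5800 K star, a 125 K planet with a tenth the stellar radius, and a Jupiter-like albedo of 0.5 at a Jupiter-like orbit, and in visible light the planet shines by reflected starlight rather than its own thermal glow:

    import math

    H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # SI: Planck const, light speed, Boltzmann const

    def planck(wavelength, temp):
        # Blackbody spectral radiance, W / (m^2 sr m)
        return (2 * H * C**2 / wavelength**5
                / math.expm1(H * C / (wavelength * KB * temp)))

    # Infrared (50 um): both bodies glow thermally, so compare Planck curves,
    # weighted by emitting area (radius ratio squared).
    ir_contrast = planck(50e-6, 5800) / planck(50e-6, 125) / 0.1**2
    print(f"infrared contrast ~ {ir_contrast:.1e}")   # ~2e4: the 'ten thousand'

    # Visible: the planet's own 125 K glow is negligible here; what you would
    # see is reflected starlight, roughly albedo * (R_planet / 2a)^2 of the
    # star's output.
    r_jup, a_jup = 7.0e7, 7.8e11                      # metres (assumed values)
    vis_contrast = 1 / (0.5 * (r_jup / (2 * a_jup))**2)
    print(f"visible contrast  ~ {vis_contrast:.1e}")  # ~1e9: the 'billion'

Both figures land where they should, which is reassuring, though the visible-light one is sensitive to the albedo and orbital distance you pick.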
The paper I am looking at right now (Detection of Extrasolar Giant Planets by Marcy & Butler, 1998) uses the example of a solar-type star with a Jupiter-equivalent planetary companion, seen from a distance of 10 parsecs. The star would have a visual magnitude of 5 (that is an approximation; Sol's absolute magnitude is actually 4.8) while 'Jupiter' would have a magnitude of 27*. In this hypothetical system 'Jupiter' would have an angular separation from the star of half an arcsecond. If diffraction were the only limiting factor, all we would need to resolve it is a scope with at least a 0.4 metre aperture**. Obviously, since there are plans to build rather expensive scopes (especially interferometers) in space, something more is required. And since it looks like I may be running out of time tonight I think I will stop here for now.
*For those who don't know, in the magnitude system astronomers use, the lower the number the brighter the object, extending into negative numbers if necessary. Every five steps of magnitude represent a change in brightness of 100 times, so a magnitude 1 object is 100 times as bright as a magnitude 6 one. Or, each step is brighter/dimmer by the fifth root of one hundred (~2.512; the scale is logarithmic, see?)
**D > 0.4 m × (λ / 1 μm) × (d / 10 pc) × (5 AU / r), with λ being the wavelength observations happen at, d the distance to the star, and r their orbital separation.
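Putting the numbers from the example and the footnotes together (again my own sketch; the 0.4 m coefficient is the paper's, and with the textbook 1.22 diffraction factor you end up nearer 0.5 m, so treat it as order-of-magnitude):

    import math

    AU, PC = 1.496e11, 3.086e16          # metres per AU, metres per parsec

    # Magnitudes 5 vs 27: every five steps is a factor of 100 in brightness.
    flux_ratio = 100 ** ((27 - 5) / 5)
    print(f"flux ratio: {flux_ratio:.1e}")            # ~6e8, roughly a billion

    # Angular separation of 5 AU seen from 10 pc.
    theta = 5 * AU / (10 * PC)                        # radians (small angle)
    print(f"separation: {math.degrees(theta) * 3600:.2f} arcsec")   # ~0.50"

    # Minimum aperture to resolve that at lambda = 1 micrometre.
    print(f"aperture: {1e-6 / theta:.2f} m")          # ~0.41 m (x1.22 -> ~0.5 m)

So the magnitude gap of 22 really does work out to very nearly that billion-to-one contrast, and the half-arcsecond and 0.4 m figures fall straight out of the geometry.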