DMA
In embedded systems, or more generally in computer systems, DMA stands for Direct Memory Access.
Direct memory access is the ability of a peripheral, such as an Ethernet or USB controller, to read or write data directly to or from memory without involving the CPU.
Overview
DMA requires a DMA controller, which can be part of the peripheral itself or a separate unit that serves one or more peripherals.
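As a rough sketch of what such a configuration can look like, the following C code programs a hypothetical memory-mapped DMA controller with source, destination, and byte count, then starts the transfer and polls for completion. All register names, addresses, and bit assignments here are invented for illustration; real controllers differ from device to device, so the device reference manual is the authority.

#include <stdint.h>

/*
 * Hypothetical memory-mapped DMA controller registers.
 * Addresses and bits are invented for illustration only.
 */
#define DMA_BASE        0x40020000u
#define DMA_SRC         (*(volatile uint32_t*)(DMA_BASE + 0x00))  /* Source address       */
#define DMA_DST         (*(volatile uint32_t*)(DMA_BASE + 0x04))  /* Destination address  */
#define DMA_COUNT       (*(volatile uint32_t*)(DMA_BASE + 0x08))  /* Transfer size, bytes */
#define DMA_CTRL        (*(volatile uint32_t*)(DMA_BASE + 0x0C))  /* Control / status     */
#define DMA_CTRL_START  (1u << 0)
#define DMA_CTRL_DONE   (1u << 1)

/* Set up and start one transfer, then wait for it to finish. */
static void DMA_Transfer(const void* pSrc, void* pDst, uint32_t NumBytes) {
  DMA_SRC   = (uint32_t)(uintptr_t)pSrc;
  DMA_DST   = (uint32_t)(uintptr_t)pDst;
  DMA_COUNT = NumBytes;
  DMA_CTRL  = DMA_CTRL_START;
  while ((DMA_CTRL & DMA_CTRL_DONE) == 0u) {
    /* Busy-wait; a real application would typically wait for an interrupt instead. */
  }
}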
Benefits
A DMA controller can usually perform the transfer faster than the CPU, as it is optimized for the single purpose of moving data to or from a peripheral. Another benefit is that the CPU does not need to be involved: it can continue with whatever it is doing, instead of having to jump to an ISR (interrupt service routine) in which it then takes care of the data transfer.
This also improves reaction times: a CPU needs to perform a context save when entering an ISR, and it may also be busy performing other tasks (or servicing another interrupt at the same or higher priority). Using a DMA therefore reduces latency and basically eliminates the risk of buffer overruns (data not read or written in the required time). In addition, since a DMA is more efficient, it reduces power consumption. So: Many benefits.
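To illustrate the ISR overhead a DMA avoids, compare a simplified interrupt-driven UART receive, where the CPU enters an ISR (with context save and restore) for every single byte, with a DMA-driven one, where the CPU is interrupted only once per completed buffer. The register and handler names below are assumptions, not any particular device's API.

#include <stdint.h>

#define UART_DATA  (*(volatile uint8_t*)0x40011004u)  /* Hypothetical UART data register */
#define BUF_SIZE   256u

static volatile uint8_t  _acRxBuf[BUF_SIZE];
static volatile uint32_t _NumBytes;

/* Without DMA: one ISR entry, including context save/restore, per byte. */
void UART_RX_ISR(void) {
  if (_NumBytes < BUF_SIZE) {
    _acRxBuf[_NumBytes++] = UART_DATA;
  }
}

/* With DMA: the controller fills _acRxBuf autonomously; the CPU is
   interrupted a single time, once the whole buffer has been received. */
void DMA_Done_ISR(void) {
  _NumBytes = BUF_SIZE;
  /* Process the completed buffer here, or notify a task to do so. */
}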
Downsides
Using DMA makes the program somewhat more complicated, as the DMA controller needs to be configured correctly.
Most high-end systems also use caches, which are bypassed by the DMA. So when the CPU wants to access data written by the DMA, it first needs to invalidate the affected memory area in the cache, to avoid reading "old" data from the cache instead of the freshly transferred data.
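On an Arm Cortex-M7, for instance, this can be done with the CMSIS cache maintenance functions. A minimal sketch for a DMA receive buffer follows; the buffer name, its size, and the device header are assumptions.

#include <stdint.h>
#include "stm32h7xx.h"  /* Assumed CMSIS device header; provides SCB_InvalidateDCache_by_Addr() */

/* DMA receive buffer, aligned to the Cortex-M7's 32-byte cache lines. */
__attribute__((aligned(32))) static uint8_t _acRxBuf[512];

void OnRxComplete(void) {
  /* Discard stale cached copies so the CPU fetches the freshly
     DMA-written data from main memory instead of the cache. */
  SCB_InvalidateDCache_by_Addr((void*)_acRxBuf, sizeof(_acRxBuf));
  /* _acRxBuf can now safely be processed. */
}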
Similarly, when the DMA reads from main memory: the memory area must first be "cleaned" (flushed from the cache) to ensure the correct data actually is in main memory, not just in the cache. Alternatively, an uncached memory area can be used for DMA transfers. When using cached areas, it is also important that the buffer starts at a cache line boundary and ideally occupies whole cache lines; otherwise, cleaning or invalidating the buffer also affects unrelated data that shares a cache line (see the transmit-side sketch below). So: More performance, less power, but more complexity.
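The transmit direction, again sketched with the CMSIS cache functions and an assumed buffer, cleans the cached data to main memory before the DMA is started:

#include <stdint.h>
#include "stm32h7xx.h"  /* Assumed CMSIS device header, as above */

/* DMA transmit buffer, starting at a cache line boundary and sized
   as a multiple of the 32-byte cache line. */
__attribute__((aligned(32))) static uint8_t _acTxBuf[512];

void StartTx(void) {
  /* ... fill _acTxBuf with the data to send ... */
  /* Clean (flush) the cached copy to main memory so the DMA,
     which bypasses the cache, reads the current data. */
  SCB_CleanDCache_by_Addr((void*)_acTxBuf, sizeof(_acTxBuf));
  /* ... then configure and start the DMA transfer, e.g. via DMA_Transfer() above. */
}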