DMA attributes
==============

This document describes the semantics of the DMA attributes that are
defined in linux/dma-mapping.h.

DMA_ATTR_WRITE_BARRIER
----------------------

DMA_ATTR_WRITE_BARRIER is a (write) barrier attribute for DMA. DMA
to a memory region with the DMA_ATTR_WRITE_BARRIER attribute forces
all pending DMA writes to complete, and thus provides a mechanism to
strictly order DMA from a device across all intervening busses and
bridges. This barrier is not specific to a particular type of
interconnect; it applies to the system as a whole, and so its
implementation must account for the idiosyncrasies of the system all
the way from the DMA device to memory.

As an example of a situation where DMA_ATTR_WRITE_BARRIER would be
useful, suppose that a device does a DMA write to indicate that data is
ready and available in memory. The DMA of the "completion indication"
could race with data DMA. Mapping the memory used for completion
indications with DMA_ATTR_WRITE_BARRIER would prevent the race.
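
For instance, the driver could map the completion-indication buffer with
this attribute while mapping the data buffers normally. A minimal sketch,
assuming the driver already has a struct device *dev and a suitably
allocated status buffer (status_buf and status_size are illustrative
names, not part of any real API):

  dma_addr_t status_dma;

  /* DMA writes landing in this region flush all pending DMA writes */
  status_dma = dma_map_single_attrs(dev, status_buf, status_size,
                                    DMA_FROM_DEVICE,
                                    DMA_ATTR_WRITE_BARRIER);
  if (dma_mapping_error(dev, status_dma))
          goto err;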

DMA_ATTR_WEAK_ORDERING
----------------------

DMA_ATTR_WEAK_ORDERING specifies that reads and writes to the mapping
may be weakly ordered, that is, reads and writes may pass each other.

Since it is optional for platforms to implement DMA_ATTR_WEAK_ORDERING,
those that do not will simply ignore the attribute and exhibit default
behavior.
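
A minimal sketch of passing the attribute when mapping a scatterlist
(dev, sg and nents are assumed to already exist in the driver):

  int count;

  /* let the platform relax ordering for this mapping, if it supports it */
  count = dma_map_sg_attrs(dev, sg, nents, DMA_TO_DEVICE,
                           DMA_ATTR_WEAK_ORDERING);
  if (count == 0)
          goto err;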

DMA_ATTR_WRITE_COMBINE
----------------------

DMA_ATTR_WRITE_COMBINE specifies that writes to the mapping may be
buffered to improve performance.

Since it is optional for platforms to implement DMA_ATTR_WRITE_COMBINE,
those that do not will simply ignore the attribute and exhibit default
behavior.
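
A minimal sketch of an allocation (for example, a frame buffer) where
write-combined CPU writes are acceptable (dev and size come from the
driver):

  void *vaddr;
  dma_addr_t dma_handle;

  vaddr = dma_alloc_attrs(dev, size, &dma_handle, GFP_KERNEL,
                          DMA_ATTR_WRITE_COMBINE);
  if (!vaddr)
          return -ENOMEM;

  /* ... use the buffer ... */

  dma_free_attrs(dev, size, vaddr, dma_handle, DMA_ATTR_WRITE_COMBINE);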

DMA_ATTR_NON_CONSISTENT
-----------------------

DMA_ATTR_NON_CONSISTENT lets the platform choose to return either
consistent or non-consistent memory as it sees fit. By using this API,
you are guaranteeing to the platform that you have all the correct and
necessary sync points for this memory in the driver.
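
A minimal sketch of such an allocation; the explicit synchronization
shown here (dma_cache_sync()) is only one possible sync point, and the
calls actually required depend on the driver and its access pattern:

  void *vaddr;
  dma_addr_t dma_handle;

  vaddr = dma_alloc_attrs(dev, size, &dma_handle, GFP_KERNEL,
                          DMA_ATTR_NON_CONSISTENT);
  if (!vaddr)
          return -ENOMEM;

  /* fill the buffer, then make it visible to the device */
  dma_cache_sync(dev, vaddr, size, DMA_TO_DEVICE);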

DMA_ATTR_NO_KERNEL_MAPPING
--------------------------

DMA_ATTR_NO_KERNEL_MAPPING lets the platform avoid creating a kernel
virtual mapping for the allocated buffer. On some architectures creating
such a mapping is a non-trivial task and consumes very limited resources
(like kernel virtual address space or DMA consistent address space).
Buffers allocated with this attribute can only be passed to user space
by calling dma_mmap_attrs(). By using this API, you are guaranteeing
that you won't dereference the pointer returned by dma_alloc_attrs(). You
can treat it as a cookie that must be passed to dma_mmap_attrs() and
dma_free_attrs(). Make sure that both of these also get this attribute
set on each call.

Since it is optional for platforms to implement
DMA_ATTR_NO_KERNEL_MAPPING, those that do not will simply ignore the
attribute and exhibit default behavior.
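
A minimal sketch; the returned pointer is treated purely as a cookie and
is only ever handed back to dma_mmap_attrs() and dma_free_attrs() (vma
comes from the driver's mmap handler):

  void *cookie;
  dma_addr_t dma_handle;
  int ret;

  cookie = dma_alloc_attrs(dev, size, &dma_handle, GFP_KERNEL,
                           DMA_ATTR_NO_KERNEL_MAPPING);
  if (!cookie)
          return -ENOMEM;

  /* in the mmap file operation */
  ret = dma_mmap_attrs(dev, vma, cookie, dma_handle, size,
                       DMA_ATTR_NO_KERNEL_MAPPING);

  /* on teardown */
  dma_free_attrs(dev, size, cookie, dma_handle,
                 DMA_ATTR_NO_KERNEL_MAPPING);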

DMA_ATTR_SKIP_CPU_SYNC
----------------------

By default, the dma_map_{single,page,sg} family of functions transfers a
given buffer from the CPU domain to the device domain. Some advanced use
cases might require sharing a buffer between more than one device. This
requires having a mapping created separately for each device and is
usually performed by calling the dma_map_{single,page,sg} functions more
than once for the given buffer, with the device pointer of each device
taking part in the buffer sharing. The first call transfers the buffer
from the 'CPU' domain to the 'device' domain, which synchronizes the CPU
caches for the given region (usually it means that the cache has been
flushed or invalidated, depending on the DMA direction). However,
subsequent calls to dma_map_{single,page,sg}() for other devices will
perform exactly the same CPU cache synchronization operation. CPU cache
synchronization might be a time-consuming operation, especially if the
buffers are large, so it is highly recommended to avoid it if possible.
DMA_ATTR_SKIP_CPU_SYNC allows platform code to skip synchronization of
the CPU cache for the given buffer, assuming that it has already been
transferred to the 'device' domain. This attribute can also be used with
the dma_unmap_{single,page,sg} family of functions to force the buffer
to stay in the device domain after releasing its mapping. Use this
attribute with care!
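
A minimal sketch of sharing one buffer between two devices, where only
the first mapping performs the CPU cache synchronization (dev1, dev2,
buf and size are assumed driver-local names):

  dma_addr_t dma1, dma2;

  /* the first mapping synchronizes the CPU cache as usual */
  dma1 = dma_map_single_attrs(dev1, buf, size, DMA_TO_DEVICE, 0);
  if (dma_mapping_error(dev1, dma1))
          goto err;

  /* the buffer is already in the 'device' domain, skip the second sync */
  dma2 = dma_map_single_attrs(dev2, buf, size, DMA_TO_DEVICE,
                              DMA_ATTR_SKIP_CPU_SYNC);
  if (dma_mapping_error(dev2, dma2))
          goto err_unmap;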

DMA_ATTR_FORCE_CONTIGUOUS
-------------------------

By default, the DMA-mapping subsystem is allowed to assemble the buffer
allocated by the dma_alloc_attrs() function from individual pages if it
can be mapped as a contiguous chunk into the device's DMA address space.
By specifying this attribute the allocated buffer is forced to be
contiguous in physical memory as well.

DMA_ATTR_ALLOC_SINGLE_PAGES
---------------------------

This is a hint to the DMA-mapping subsystem that it's probably not worth
the time to try to allocate memory in a way that gives better TLB
efficiency (AKA it's not worth trying to build the mapping out of larger
pages). You might want to specify this if:

- You know that the accesses to this memory won't thrash the TLB.
  You might know that the accesses are likely to be sequential or
  that they aren't sequential but it's unlikely you'll ping-pong
  between many addresses that are likely to be in different physical
  pages.
- You know that the penalty of TLB misses while accessing the
  memory will be small enough to be inconsequential. If you are
  doing a heavy operation like decryption or decompression this
  might be the case.
- You know that the DMA mapping is fairly transitory. If you expect
  the mapping to have a short lifetime then it may be worth it to
  optimize allocation (avoid coming up with large pages) instead of
  getting the slight performance win of larger pages.

Setting this hint doesn't guarantee that you won't get huge pages, but it
means that we won't try quite as hard to get them.
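
A minimal sketch of a short-lived buffer where single pages are good
enough; since the attributes form a bitmask, the hint can be OR'd with
other DMA_ATTR_* flags as needed (dev and size come from the driver):

  void *vaddr;
  dma_addr_t dma_handle;

  /* transitory buffer; don't spend effort on large pages */
  vaddr = dma_alloc_attrs(dev, size, &dma_handle, GFP_KERNEL,
                          DMA_ATTR_ALLOC_SINGLE_PAGES);
  if (!vaddr)
          return -ENOMEM;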

NOTE: At the moment DMA_ATTR_ALLOC_SINGLE_PAGES is only implemented on ARM,
though ARM64 patches will likely be posted soon.

DMA_ATTR_NO_WARN
----------------

This tells the DMA-mapping subsystem to suppress allocation failure reports
(similarly to __GFP_NOWARN).

On some architectures allocation failures are reported with error messages
to the system logs. Although this can help to identify and debug problems,
drivers which handle failures (e.g. by retrying later) do not need these
reports, and depending on the implementation of the retry mechanism the
failures can actually flood the system logs with messages that indicate no
real problem at all.

So, this provides a way for drivers to avoid those error messages on calls
where allocation failures are not a problem, and shouldn't bother the logs.
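
A minimal sketch of an allocation by a driver that handles the failure
itself (dev and size come from the driver; the retry policy is entirely
up to the caller):

  void *vaddr;
  dma_addr_t dma_handle;

  vaddr = dma_alloc_attrs(dev, size, &dma_handle, GFP_KERNEL,
                          DMA_ATTR_NO_WARN);
  if (!vaddr)
          return -EAGAIN;   /* caller retries later */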

NOTE: At the moment DMA_ATTR_NO_WARN is only implemented on PowerPC.