====================
DMA Engine API Guide
====================

Vinod Koul <vinod dot koul at intel.com>
.. note:: For DMA Engine usage in async_tx please see:
          ``Documentation/crypto/async-tx-api.rst``
Below is a guide to device driver writers on how to use the Slave-DMA API of the
DMA Engine. This is applicable only for slave DMA usage.

DMA usage
=========
The slave DMA usage consists of the following steps:
- Allocate a DMA slave channel

- Set slave and controller specific parameters

- Get a descriptor for transaction

- Submit the transaction

- Issue pending requests and wait for callback notification

The details of these operations are:
1. Allocate a DMA slave channel

   Channel allocation is slightly different in the slave DMA context;
   client drivers typically need a channel from a particular DMA
   controller only, and in some cases even a specific channel is desired.
   To request a channel, the dma_request_chan() API is used.

   Interface:

   .. code-block:: c
      struct dma_chan *dma_request_chan(struct device *dev, const char *name);
   This will find and return the ``name`` DMA channel associated with the
   'dev' device. The association is done via DT, ACPI or a board file based
   dma_slave_map matching table.
   A channel allocated via this interface is exclusive to the caller,
   until dma_release_channel() is called.
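
   As a rough sketch (assuming a hypothetical client driver whose DT node
   carries an "rx" entry in dma-names), channel setup and teardown could
   look like this:

   .. code-block:: c

      struct dma_chan *chan;

      chan = dma_request_chan(dev, "rx");	/* "rx" is an assumed name */
      if (IS_ERR(chan))
              return PTR_ERR(chan);

      /* ... configure and use the channel ... */

      dma_release_channel(chan);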
2. Set slave and controller specific parameters

   The next step is always to pass some specific information to the DMA
   driver. Most of the generic information which a slave DMA can use
   is in struct dma_slave_config. This allows the clients to specify
   DMA direction, DMA addresses, bus widths, DMA burst lengths etc.
   for this channel.

   If some DMA controllers have more parameters to be sent, then they
   should embed struct dma_slave_config in their controller-specific
   structure. That gives flexibility to clients to pass more parameters,
   if required.

   Interface:

   .. code-block:: c
      int dmaengine_slave_config(struct dma_chan *chan,
                                 struct dma_slave_config *config)
   Please see the dma_slave_config structure definition in dmaengine.h
   for a detailed explanation of the struct members. Please note
   that the 'direction' member will be going away, as it duplicates the
   direction given in the prepare call.
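
   For example, a minimal sketch for a device-to-memory channel (the FIFO
   address and burst size below are made-up values for illustration):

   .. code-block:: c

      struct dma_slave_config cfg = { };
      int ret;

      cfg.src_addr = 0x40001000;	/* assumed peripheral FIFO address */
      cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
      cfg.src_maxburst = 8;		/* assumed burst size in words */

      ret = dmaengine_slave_config(chan, &cfg);
      if (ret)
              return ret;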
3. Get a descriptor for transaction

   For slave usage the various modes of slave transfers supported by the
   DMA-engine are:

   - slave_sg: DMA a list of scatter gather buffers from/to a peripheral

   - dma_cyclic: Perform a cyclic DMA operation from/to a peripheral till the
     operation is explicitly stopped.
   - interleaved_dma: This is common to Slave as well as M2M clients. For slave
     usage, the address of the device's FIFO may already be known to the driver.
     Various types of operations can be expressed by setting the
     appropriate members of 'dma_interleaved_template'. Cyclic
     interleaved DMA transfers are also possible, if supported by the channel,
     by setting the DMA_PREP_REPEAT transfer flag.
   A non-NULL return of this transfer API represents a "descriptor" for
   the given transaction.

   Interface:

   .. code-block:: c
      struct dma_async_tx_descriptor *dmaengine_prep_slave_sg(
		struct dma_chan *chan, struct scatterlist *sgl,
		unsigned int sg_len, enum dma_transfer_direction direction,
		unsigned long flags);

      struct dma_async_tx_descriptor *dmaengine_prep_dma_cyclic(
		struct dma_chan *chan, dma_addr_t buf_addr, size_t buf_len,
		size_t period_len, enum dma_transfer_direction direction,
		unsigned long flags);

      struct dma_async_tx_descriptor *dmaengine_prep_interleaved_dma(
		struct dma_chan *chan, struct dma_interleaved_template *xt,
		unsigned long flags);
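
   As an illustration of the interleaved mode, a hedged sketch (all
   geometry values are invented, and src/dst are assumed dma_addr_t
   handles) of building a template that copies lines out of a strided
   frame could look like:

   .. code-block:: c

      struct dma_interleaved_template *xt;
      struct dma_async_tx_descriptor *desc;

      /* the struct ends in a flexible sgl[] array of data chunks */
      xt = kzalloc(struct_size(xt, sgl, 1), GFP_KERNEL);
      if (!xt)
              return -ENOMEM;

      xt->dir = DMA_MEM_TO_MEM;
      xt->src_start = src;		/* assumed source bus address */
      xt->dst_start = dst;		/* assumed destination bus address */
      xt->numf = 64;			/* number of frames (lines) */
      xt->frame_size = 1;		/* one chunk per frame */
      xt->sgl[0].size = 1024;		/* bytes of data per line */
      xt->sgl[0].icg = 256;		/* gap to skip after each line */
      xt->src_inc = true;
      xt->dst_inc = true;
      xt->src_sgl = true;		/* apply the icg on the source side */
      xt->dst_sgl = false;		/* destination is contiguous */

      desc = dmaengine_prep_interleaved_dma(chan, xt, 0);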
   The peripheral driver is expected to have mapped the scatterlist for
   the DMA operation prior to calling dmaengine_prep_slave_sg(), and must
   keep the scatterlist mapped until the DMA operation has completed.
   The scatterlist must be mapped using the DMA struct device.
   If a mapping needs to be synchronized later, dma_sync_*_for_*() must be
   called using the DMA struct device, too.
   So, normal setup should look like this:

   .. code-block:: c
      /* dir is the enum dma_data_direction matching the transfer */
      nr_sg = dma_map_sg(chan->device->dev, sgl, sg_len, dir);
      if (nr_sg == 0)
	      /* error */

      desc = dmaengine_prep_slave_sg(chan, sgl, nr_sg, direction, flags);
   Once a descriptor has been obtained, the callback information can be
   added and the descriptor must then be submitted. Some DMA engine
   drivers may hold a spinlock between a successful preparation and
   submission, so it is important that these two operations are closely
   paired (see the sketch after the following note).

   .. note::
      Although the async_tx API specifies that completion callback
      routines cannot submit any new operations, this is not the
      case for slave/cyclic DMA.

      For slave DMA, the subsequent transaction may not be available
      for submission prior to the callback function being invoked, so
      slave DMA callbacks are permitted to prepare and submit a new
      transaction.

      For cyclic DMA, a callback function may wish to terminate the
      DMA via dmaengine_terminate_async().

      Therefore, it is important that DMA engine drivers drop any
      locks before calling the callback function which may cause a
      deadlock.

      Note that callbacks will always be invoked from the DMA
      engine's tasklet, never from interrupt context.
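
   Attaching the completion information before submission could look like
   this (my_callback and my_ctx are hypothetical client-side names; both
   fields are optional):

   .. code-block:: c

      desc->callback = my_callback;	/* runs in tasklet context */
      desc->callback_param = my_ctx;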
   **Optional: per descriptor metadata**

   DMAengine provides two ways for metadata support.

   DESC_METADATA_CLIENT

     The metadata buffer is allocated/provided by the client driver and it is
     attached to the descriptor.

     .. code-block:: c
        int dmaengine_desc_attach_metadata(struct dma_async_tx_descriptor *desc,
					   void *data, size_t len);
   DESC_METADATA_ENGINE

     The metadata buffer is allocated/managed by the DMA driver. The client
     driver can ask for the pointer, maximum size and the currently used size of
     the metadata and can directly update or read it.
     Because the DMA driver manages the memory area containing the metadata,
     clients must make sure that they do not try to access or get the pointer
     after their transfer completion callback has run for the descriptor.
     If no completion callback has been defined for the transfer, then the
     metadata must not be accessed after dma_async_issue_pending() has been
     called.
     In other words: if the aim is to read back metadata after the transfer is
     completed, then the client must use a completion callback.

     .. code-block:: c
        void *dmaengine_desc_get_metadata_ptr(struct dma_async_tx_descriptor *desc,
		size_t *payload_len, size_t *max_len);

        int dmaengine_desc_set_metadata_len(struct dma_async_tx_descriptor *desc,
		size_t payload_len);
   Client drivers can query if a given mode is supported with:

   .. code-block:: c

      bool dmaengine_is_metadata_mode_supported(struct dma_chan *chan,
		enum dma_desc_metadata_mode mode);
   Depending on the used mode, client drivers must follow different flows.

   DESC_METADATA_CLIENT
     - DMA_MEM_TO_DEV / DMA_MEM_TO_MEM:

       1. prepare the descriptor (dmaengine_prep_*)
          construct the metadata in the client's buffer
       2. use dmaengine_desc_attach_metadata() to attach the buffer to the
          descriptor
       3. submit the transfer

     - DMA_DEV_TO_MEM:

       1. prepare the descriptor (dmaengine_prep_*)
       2. use dmaengine_desc_attach_metadata() to attach the buffer to the
          descriptor
       3. submit the transfer
       4. when the transfer is completed, the metadata should be available in
          the attached buffer
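
     A rough sketch of the DMA_DEV_TO_MEM flow (the report buffer and its
     type are hypothetical client-owned state):

     .. code-block:: c

        struct my_report *report;	/* client-owned metadata buffer */
        int ret;

        desc = dmaengine_prep_slave_sg(chan, sgl, nr_sg, DMA_DEV_TO_MEM, flags);
        if (!desc)
                return -EINVAL;

        ret = dmaengine_desc_attach_metadata(desc, report, sizeof(*report));
        if (ret)
                return ret;

        dmaengine_submit(desc);
        dma_async_issue_pending(chan);
        /* once the completion callback has run, *report holds the metadata */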
   DESC_METADATA_ENGINE

     - DMA_MEM_TO_DEV / DMA_MEM_TO_MEM:

       1. prepare the descriptor (dmaengine_prep_*)
       2. use dmaengine_desc_get_metadata_ptr() to get the pointer to the
          engine's metadata area
       3. update the metadata at the pointer
       4. use dmaengine_desc_set_metadata_len() to tell the DMA engine the
          amount of data the client has placed into the metadata buffer
       5. submit the transfer
     - DMA_DEV_TO_MEM:

       1. prepare the descriptor (dmaengine_prep_*)
       2. submit the transfer
       3. on transfer completion, use dmaengine_desc_get_metadata_ptr() to get
          the pointer to the engine's metadata area
       4. read out the metadata from the pointer
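
     A rough sketch of the DMA_MEM_TO_DEV flow (bytes_written is a
     hypothetical length computed by the client):

     .. code-block:: c

        void *ptr;
        size_t payload_len, max_len;

        desc = dmaengine_prep_slave_sg(chan, sgl, nr_sg, DMA_MEM_TO_DEV, flags);
        ptr = dmaengine_desc_get_metadata_ptr(desc, &payload_len, &max_len);
        if (IS_ERR(ptr))
                return PTR_ERR(ptr);

        /* ... write up to max_len bytes of metadata at ptr ... */

        dmaengine_desc_set_metadata_len(desc, bytes_written);
        dmaengine_submit(desc);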
   .. note::

      When DESC_METADATA_ENGINE mode is used the metadata area for the descriptor
      is no longer valid after the transfer has been completed (valid up to the
      point when the completion callback returns, if used).

   Mixed use of DESC_METADATA_CLIENT / DESC_METADATA_ENGINE is not allowed;
   client drivers must use one of the two modes per descriptor.
4. Submit the transaction

   Once the descriptor has been prepared and the callback information
   added, it must be placed on the DMA engine driver's pending queue.

   Interface:

   .. code-block:: c

      dma_cookie_t dmaengine_submit(struct dma_async_tx_descriptor *desc)
   This returns a cookie, which can be used to check the progress of DMA engine
   activity via other DMA engine calls not covered in this document.

   dmaengine_submit() will not start the DMA operation, it merely adds
   it to the pending queue. For this, see step 5, dma_async_issue_pending().
   .. note::

      After calling ``dmaengine_submit()`` the submitted transfer descriptor
      (``struct dma_async_tx_descriptor``) belongs to the DMA engine.
      Consequently, the client must consider invalid the pointer to that
      descriptor.
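
   A minimal sketch of the submission step; dma_submit_error() checks the
   returned cookie for an error value:

   .. code-block:: c

      dma_cookie_t cookie;

      cookie = dmaengine_submit(desc);
      if (dma_submit_error(cookie))
              return -EINVAL;	/* the descriptor was not queued */
      /* do not touch desc from here on */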
5. Issue pending DMA requests and wait for callback notification

   The transactions in the pending queue can be activated by calling the
   issue_pending API. If the channel is idle then the first transaction in
   the queue is started and subsequent ones queued up.

   On completion of each DMA operation, the next in queue is started and
   a tasklet triggered. The tasklet will then call the client driver
   completion callback routine for notification, if set.

   Interface:

   .. code-block:: c

      void dma_async_issue_pending(struct dma_chan *chan);
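
   A common pattern (a sketch; my_ctx and its struct completion are
   hypothetical client-side state) is to kick the queue and then sleep
   until the completion callback fires:

   .. code-block:: c

      static void my_callback(void *param)
      {
              struct my_ctx *ctx = param;

              complete(&ctx->done);
      }

      /* ... after dmaengine_submit() ... */
      reinit_completion(&my_ctx->done);
      dma_async_issue_pending(chan);

      if (!wait_for_completion_timeout(&my_ctx->done,
                                       msecs_to_jiffies(1000))) {
              dmaengine_terminate_sync(chan);
              return -ETIMEDOUT;
      }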
Further APIs
------------

1. Terminate APIs

   .. code-block:: c

      int dmaengine_terminate_sync(struct dma_chan *chan)
      int dmaengine_terminate_async(struct dma_chan *chan)
      int dmaengine_terminate_all(struct dma_chan *chan) /* DEPRECATED */
   This causes all activity for the DMA channel to be stopped, and may
   discard data in the DMA FIFO which hasn't been fully transferred.
   No callback functions will be called for any incomplete transfers.

   Two variants of this function are available.
   dmaengine_terminate_async() might not wait until the DMA has been fully
   stopped or until any running complete callbacks have finished. But it is
   possible to call dmaengine_terminate_async() from atomic context or from
   within a complete callback. dmaengine_synchronize() must be called before it
   is safe to free the memory accessed by the DMA transfer or free resources
   accessed from within the complete callback.
   dmaengine_terminate_sync() will wait for the transfer and any running
   complete callbacks to finish before it returns. But the function must not be
   called from atomic context or from within a complete callback.

   dmaengine_terminate_all() is deprecated and should not be used in new code.
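
   As a sketch, choosing the right variant by calling context:

   .. code-block:: c

      /* process context, not inside a completion callback */
      dmaengine_terminate_sync(chan);

      /* atomic context, or inside a completion callback */
      dmaengine_terminate_async(chan);
      /* ... then, later, from process context: */
      dmaengine_synchronize(chan);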
2. Pause API

   .. code-block:: c

      int dmaengine_pause(struct dma_chan *chan)

   This pauses activity on the DMA channel without data loss.

3. Resume API

   .. code-block:: c

      int dmaengine_resume(struct dma_chan *chan)

   Resume a previously paused DMA channel. It is invalid to resume a
   channel which is not currently paused.
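
   For instance (a sketch; not every channel supports pausing, so the
   return value must be checked):

   .. code-block:: c

      int ret;

      ret = dmaengine_pause(chan);
      if (ret)
              return ret;	/* pausing not supported on this channel */

      /* ... channel is quiescent, no data has been lost ... */

      dmaengine_resume(chan);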
4. Check Txn complete

   .. code-block:: c

      enum dma_status dma_async_is_tx_complete(struct dma_chan *chan,
		dma_cookie_t cookie, dma_cookie_t *last, dma_cookie_t *used)

   This can be used to check the status of the channel. Please see
   the documentation in include/linux/dmaengine.h for a more complete
   description of this API.
   This can be used in conjunction with dma_async_is_complete() and
   the cookie returned from dmaengine_submit() to check for
   completion of a specific DMA transaction.

   .. note::

      Not all DMA engine drivers can return reliable information for
      a running DMA channel. It is recommended that DMA engine users
      pause or stop (via dmaengine_terminate_sync()) the channel before
      using this API.
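
   A short sketch using the cookie returned from dmaengine_submit():

   .. code-block:: c

      dma_cookie_t last, used;
      enum dma_status status;

      status = dma_async_is_tx_complete(chan, cookie, &last, &used);
      if (status == DMA_COMPLETE)
              /* the transaction identified by cookie has finished */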
5. Synchronize termination API

   .. code-block:: c

      void dmaengine_synchronize(struct dma_chan *chan)

   Synchronize the termination of the DMA channel to the current context.

   This function should be used after dmaengine_terminate_async() to synchronize
   the termination of the DMA channel to the current context. The function will
   wait for the transfer and any running complete callbacks to finish before it
   returns.
   If dmaengine_terminate_async() is used to stop the DMA channel, this function
   must be called before it is safe to free memory accessed by previously
   submitted descriptors or to free any resources accessed within the complete
   callback of previously submitted descriptors.
   The behavior of this function is undefined if dma_async_issue_pending() has
   been called between dmaengine_terminate_async() and this function.
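
   A sketch of a safe teardown sequence from a hypothetical remove path:

   .. code-block:: c

      dmaengine_terminate_async(chan);
      /* ... */
      dmaengine_synchronize(chan);
      /* now safe to free DMA buffers and callback resources */
      dma_release_channel(chan);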