04-22-2009 08:37 AM
Hi,
I'm just wondering why PAE is not supported with DAQmx? I commented out the memory check in nikal.c and ran updateNIDrivers. Everything "seems" to be working fine. What misbehaviour should I be prepared for? I just put 5GB of RAM in this machine today and I'm hoping I'll be able to keep using it.
Thanks,
John
04-23-2009 01:36 PM
Are you running a 32-bit or 64-bit OS? (I'm guessing you are on a 32-bit OS.) OpenSuSE 11.1 is not officially supported, and there are some known issues with NI-KAL when the computer has more than 4 GB of RAM. The link below briefly explains this issue.
NI-KAL Fails to Load When I Have Greater Than 4GB of RAM
I would highly recommend joining the Linux User Group. It is monitored by many experienced Linux users and lets you ask questions, post tutorials, etc.
04-29-2009 09:40 AM
Hey ninevoltz,
The problem with PAE is that you have more than 4 GB of physically addressable memory. Most of our PCI hardware can only address 32 bits of memory, which means that if you want to perform a DMA operation, the buffer the device copies to/from must reside in the lower 32 bits of the physical address space. So in order to support PAE, you need to make sure that all DMA buffers contain only 32-bit physical addresses.
Unfortunately, this is harder than it sounds, especially when the driver wasn't designed with this in mind. There are two places DMA buffers can be allocated: in the kernel, or in user mode. If the buffer is allocated in the kernel, it really depends on which API you use to allocate the memory. APIs like kmalloc() and dma_alloc_coherent() will give you 32-bit physically addressable, physically contiguous memory. For small allocations they may work fine, but for larger allocations they may fail. If you use something like vmalloc() to allocate the memory, you can probably get larger buffers, but those buffers may contain pages above the 32-bit boundary. Similarly, if the DMA buffer is allocated in user mode for zero-copy DMA, the buffer may contain physical pages above the 32-bit boundary.
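A minimal sketch of the kernel-side case, assuming a placeholder device pointer and buffer length (illustrative only, not NI driver code):

#include <linux/dma-mapping.h>
#include <linux/gfp.h>

/* Sketch: allocate a small DMA buffer that is physically contiguous and
 * honours the device's DMA mask, so with a 32-bit mask the pages land
 * below the 4 GB boundary.  "dev" and "len" are placeholders. */
static void *alloc_small_dma_buffer(struct device *dev, size_t len,
                                    dma_addr_t *dma_handle)
{
        /* May fail for large 'len', because the memory must be contiguous. */
        return dma_alloc_coherent(dev, len, dma_handle, GFP_KERNEL);
}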
You can work around some of these issues by using the right APIs, or by walking every DMA buffer and checking that every page is below the 32-bit boundary. Unfortunately, these kinds of checks hurt performance. So even if we fix our drivers to support 64-bit operating systems, we likely won't make the same changes to our 32-bit drivers.
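A rough sketch of the "walk every buffer" check, here for a vmalloc'ed buffer (a hypothetical helper, not NI code); the per-page walk is exactly the kind of overhead mentioned above:

#include <linux/io.h>
#include <linux/mm.h>
#include <linux/vmalloc.h>

/* Hypothetical check: true only if every page backing a vmalloc'ed buffer
 * lies below the 4 GB physical boundary. */
static bool buffer_fits_32bit(const void *buf, size_t len)
{
        const char *addr = buf;
        size_t off;

        for (off = 0; off < len; off += PAGE_SIZE) {
                struct page *page = vmalloc_to_page(addr + off);
                phys_addr_t phys = page_to_phys(page);

                if (phys + PAGE_SIZE > 0x100000000ULL)
                        return false; /* page is above the 32-bit boundary */
        }
        return true;
}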
Lastly, it really isn't possible to just fix this in something like NI-KAL. NI-KAL has no idea whether memory is being allocated for DMA or for something else. You could make all of NI-KAL's memory allocation APIs always allocate 32-bit physically addressable memory (which actually isn't hard to do), but that still won't fix the cases where DMA buffers are allocated in user mode.
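To illustrate the "not hard to do" part, a generic allocation wrapper (hypothetical, not NI-KAL's actual API) can force pages below 4 GB simply by requesting the DMA32 zone:

#include <linux/gfp.h>

/* Hypothetical wrapper: allocate 2^order pages guaranteed to sit below
 * 4 GB by asking for the DMA32 zone.  This helps kernel-side allocations,
 * but does nothing for buffers that user mode hands to the driver. */
static void *alloc_pages_below_4g(unsigned int order)
{
        return (void *)__get_free_pages(GFP_KERNEL | GFP_DMA32, order);
}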
Shawn Bohrer
National Instruments
04-29-2009 10:54 AM
Shawn,
Thanks for the thoughtful reply. I always appreciate it when someone goes the extra mile to explain something rather than just say "you can't do that".
I did discover something interesting today, though, when booting into Fedora 8 again. If the kernel is configured with CONFIG_HIGHMEM4G (not CONFIG_HIGHMEM64G), the nNIKAL100_initDriver memory check fails. If I comment the memory check out, everything loads fine and my devices work OK too. With CONFIG_HIGHMEM64G (PAE), as you explained, it will not work. I took out 1 GB of my RAM, so I now have only 4 GB.
Thanks again,
John
06-20-2018 06:13 PM
@Shawn B. wrote: Most of our PCI hardware can only address 32 bits of memory, which means that if you want to perform a DMA operation, the buffer the device copies to/from must reside in the lower 32 bits of the physical address space. So in order to support PAE, you need to make sure that all DMA buffers contain only 32-bit physical addresses.
This is exactly why the DMA API has the DMA mask: it tells how many address bits the device supports, so the allocator can pick the right region. (If necessary, the kernel can also migrate unpinned pages to make room.)
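For instance, a PCI driver would typically declare the 32-bit limitation at probe time, roughly like this (sketch only, "pdev" is a placeholder):

#include <linux/dma-mapping.h>
#include <linux/pci.h>

/* Sketch: tell the DMA API that this device can only address 32 bits, so
 * allocations and mappings are constrained (or bounced) accordingly. */
static int declare_32bit_dma(struct pci_dev *pdev)
{
        return dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
}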
Unfortunately this is harder than it sounds, especially when the driver wasn't designed with this in mind.
Not using the DMA API? Why is that?
There are two places DMA buffers can be allocated: in the kernel, or in user mode. If the buffer is allocated in the kernel, it really depends on which API you use to allocate the memory. APIs like kmalloc() and dma_alloc_coherent() will give you 32-bit physically addressable, physically contiguous memory.
Don't use kmalloc() - it doesn't know the device's constraints. Use dma_alloc_coherent() instead.
For small allocations they may work fine, but for larger allocations they may fail.
Use sg-lists with multiple buffers. Otherwise the kernel has to migrate lots of pages, and it might not find a large enough contiguous physical address range. Oh, don't tell me your PCI cards don't support sg-lists - that would be dilettantish.
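Roughly what the sg-list approach looks like, with placeholder names (not any particular driver):

#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/scatterlist.h>

/* Sketch: map an array of (possibly non-contiguous) pages for DMA in one
 * go.  The DMA API honours the device's mask and can use an IOMMU or
 * bounce buffers per segment.  "dev", "pages" and "n_pages" are
 * placeholders. */
static int map_pages_for_dma(struct device *dev, struct page **pages,
                             unsigned int n_pages, struct sg_table *sgt)
{
        int ret;

        ret = sg_alloc_table_from_pages(sgt, pages, n_pages, 0,
                                        (unsigned long)n_pages * PAGE_SIZE,
                                        GFP_KERNEL);
        if (ret)
                return ret;

        if (!dma_map_sg(dev, sgt->sgl, sgt->nents, DMA_FROM_DEVICE)) {
                sg_free_table(sgt);
                return -EIO;
        }
        return 0;
}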
Similarly if the DMA buffer is allocated in user-mode for zero-copy DMA then the buffer may contain physical pages above the 32-bit boundary.
Don't do that!
DMA buffers can't really be allocated safely from userland (well, except for UIO). It's just a matter of luck that it works on *some* platforms, and with an IOMMU it becomes tricky. You'd have to re-allocate or migrate those pages from within the driver.
Let the driver do the allocations, via the DMA API.
In case you want to do DMA between several devices, dma_buf is your friend.
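The importer side of dma_buf, very roughly (placeholder names, error handling trimmed; just a sketch):

#include <linux/dma-buf.h>
#include <linux/err.h>

/* Sketch: take a buffer exported by another driver as a file descriptor
 * and map it into this device's DMA address space.  "dev" and "fd" are
 * placeholders. */
static struct sg_table *import_dmabuf(struct device *dev, int fd,
                                      struct dma_buf_attachment **att)
{
        struct dma_buf *dmabuf = dma_buf_get(fd);

        if (IS_ERR(dmabuf))
                return ERR_CAST(dmabuf);

        *att = dma_buf_attach(dmabuf, dev);
        if (IS_ERR(*att)) {
                dma_buf_put(dmabuf);
                return ERR_CAST(*att);
        }

        /* Returns an sg_table describing the buffer for this device. */
        return dma_buf_map_attachment(*att, DMA_BIDIRECTIONAL);
}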
So even if we fix our drivers to support 64-bit operating systems we likely won't make the same changes to our 32-bit drivers.
You have separate drivers just for these subarchs? That's kinda funny.