Multifunction DAQ


NI-DAQmx Python error: DaqResourceWarning: Task of name "_unnamedTask<0>" was not explicitly closed

Solved!
Go to solution

I am using the NI-DAQmx Python module to get data. I think there are one or two floating tasks that I can't close, and therefore I can't continue to use the DAQ. I have seen a few posts about tasks that were not explicitly closed, but none that help with my problem, since I can't find these tasks again by name. Restarting and even shutting down my computer has not helped. I am running code from a OneDrive folder on a Windows 10 PC, with Python 3.10.13 and nidaqmx-python 1.0.1.

 

I am getting two errors, copied below:

 

File "C:\my\path\to\packages\miniconda3\envs\my_env\lib\site-packages\nidaqmx\_library_interpreter.py", line 3312, in get_read_attribute_uint32
raise DaqError(extended_error_info, error_code)
nidaqmx.errors.DaqError: The specified resource is reserved. The operation could not be completed as specified.

Task Name: _unnamedTask<1>

Status Code: -50103
self.check_for_error(error_code)
File "C:\my\path\to\packages\miniconda3\envs\my_env\lib\site-packages\nidaqmx\_library_interpreter.py", line 6412, in check_for_error
raise DaqError(extended_error_info, error_code)
nidaqmx.errors.DaqError: You only can get the specified property while the task is reserved, committed or while the task is running.
Reserve, commit or start the task prior to getting the property.

Property: DAQmx_Read_AvailSampPerChan

Task Name: _unnamedTask<1>

 

Status Code: -200983
C:\my\path\to\packages\miniconda3\envs\my_env\lib\site-packages\nidaqmx\task\_task.py:98: DaqResourceWarning: Task of name "_unnamedTask<0>" was not explicitly closed before it was destructed. Resources on the task device may still be reserved.
warnings.warn(
C:\my\path\to\packages\miniconda3\envs\my_env\lib\site-packages\nidaqmx\task\_task.py:98: DaqResourceWarning: Task of name "_unnamedTask<1>" was not explicitly closed before it was destructed. Resources on the task device may still be reserved.

A colleague of mine even took the DAQ I was using and tried it themselves, but ran into the same error.

 

Thank you for your help!
Message 1 of 6

- Have you named all your tasks?

For example, I am doing:

all_tasks = []
for device in nidaqmx.system.System.local().devices:
    task = nidaqmx.Task("Task_" + device.name)
    all_tasks.append(task)

 

- You seem to call the read available-samples-per-channel API while your task may be stopped. Does this happen at the start or at the end of the test?

 

- If you keep a handle to your tasks, you can close them once you have stopped them explicitly, or after a test with a fixed duration (for a fixed duration, there are APIs like wait_until_done()).

To be honest, I am not always fully rigorous about closing tasks either, but when starting a new acquisition I retry closing them if anything is still pending, and then I create new tasks.
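Sketched out, the "keep a handle, stop, then close" lifecycle looks like this. FakeTask is a stand-in for nidaqmx.Task so the pattern can run without hardware; with a real device you would use nidaqmx.Task(...) and the same try/finally (or a `with` block) around your acquisition:

```python
class FakeTask:
    """Minimal stand-in for nidaqmx.Task (hypothetical, for illustration)."""
    def __init__(self, name):
        self.name = name
        self.closed = False
    def start(self): pass
    def wait_until_done(self, timeout=10.0): pass
    def stop(self): pass
    def close(self):
        self.closed = True

def run_finite_acquisition(task):
    # Stop and close in `finally` so the task is released even if
    # the acquisition raises partway through.
    try:
        task.start()
        task.wait_until_done(timeout=10.0)
    finally:
        task.stop()
        task.close()

task = FakeTask("Task_Dev1")
run_finite_acquisition(task)
print(task.closed)  # True: resources released regardless of errors
```

With the real library, `with nidaqmx.Task() as task:` gives the same guarantee and avoids the DaqResourceWarning entirely.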

 

Message 2 of 6
Solution
Accepted by topic author elchd

Thank you for your reply. I have solved my problem: I was erroneously instantiating a task object in the `__init__` function of a class, and then instantiating another one via an `initialise_DAQ` member function later on. I believe the first mistake created unnamed task 0, which then conflicted with unnamed task 1, created for the same device and purpose. I fixed my code and no longer get the error.
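A hypothetical reconstruction of the bug described above, and the fix: the task was created in `__init__` and again in `initialise_DAQ`, leaving the first unclosed and the device reserved twice. The class and names here are invented for illustration:

```python
class DAQController:
    """Hypothetical sketch of the fixed pattern."""
    def __init__(self):
        # Fix: do NOT create the task here; only declare the slot.
        self.task = None

    def initialise_DAQ(self):
        # Create the task exactly once, when explicitly initialised.
        if self.task is None:
            self.task = object()  # stands in for nidaqmx.Task(...)
        return self.task

ctrl = DAQController()
t1 = ctrl.initialise_DAQ()
t2 = ctrl.initialise_DAQ()
print(t1 is t2)  # True: only one task object ever exists
```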

 

To address your points: I am not manually naming my tasks. I only have one device/task, so I do not believe this is strictly necessary for my purposes, but this recent issue may make me revisit that. Thank you for the suggestion.

 

I believe I was calling the read sample at the start of the test.

 

I have not been able to figure out how to "retry" closing the tasks once I have exited the Python script. Part of the issue is a UI I have built on top of this: if the UI runs into errors, I have to close it, typically without running the full close() procedure for the DAQ and the other equipment I am using.
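One way to cover the "UI died without running close()" case, sketched with the standard-library atexit module: register a cleanup hook so tasks get close() on unhandled exceptions and normal interpreter exit (it will not fire on a hard kill). `all_tasks`, `_close_all_tasks`, and the stub task are illustrative names, not the poster's actual code:

```python
import atexit

all_tasks = []

def _close_all_tasks():
    # Best-effort cleanup: a task may already be closed or the device gone.
    for task in all_tasks:
        try:
            task.close()
        except Exception:
            pass

# Runs when the interpreter exits, even after an uncaught UI error.
atexit.register(_close_all_tasks)

# Demo with a stand-in task class (no hardware needed):
class _StubTask:
    def __init__(self):
        self.closed = False
    def close(self):
        self.closed = True

s = _StubTask()
all_tasks.append(s)
_close_all_tasks()
print(s.closed)  # True
```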

Message 3 of 6

- Yes, naming tasks has no real purpose other than making it easier to monitor what happens, like tracing. For example, I also tend to rename my processes and threads with the prctl or pthread APIs.

 

- The avail_samp_per_chan error is then probably because the two unexpected tasks were never started, while your data handling is generic across all tasks.

It is more difficult to handle at stop. I had to add a Lock to protect callback execution while I was ending the acquisition from the UI in a parallel thread.
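A minimal sketch of that lock, under the assumption that the read callback and the UI's stop path both touch the task: a Lock plus a "stopping" flag keeps the callback from reading a task that is being torn down. The names are illustrative, not the poster's actual code:

```python
import threading

lock = threading.Lock()
stopping = False

def every_n_samples_callback(task_handle, event_type, n_samples, cb_data):
    # DAQmx-style callbacks must return an int (0 = success).
    with lock:
        if stopping:
            return 0  # acquisition is being torn down; skip the read
        # ... here you would do task.read(...) on the live task ...
    return 0

def stop_from_ui(task=None):
    global stopping
    # Taking the lock waits for any in-flight callback to finish,
    # so the task is never closed mid-read.
    with lock:
        stopping = True
    if task is not None:
        task.stop()
        task.close()

assert every_n_samples_callback(None, None, 100, None) == 0
stop_from_ui()
print(stopping)  # True: further callbacks will bail out early
```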

 

- By "retry" I mean closing the tasks before creating a new acquisition. I keep track of all my tasks in an array, close all of them when restarting, clear my tracking list, and create new tasks. That seems to work OK. When my app crashes or I close it, my PCIe-6353 tasks seem to be closed correctly and nothing complains (could be a PCIe vs. USB difference).

def configure_daqs():
    # create 1 task per card = device
    for task in all_tasks:
        task.close()
    all_tasks.clear()
    for device in nidaqmx.system.System.local().devices:
        task = nidaqmx.Task("Task_" + device.name)
        all_tasks.append(task)

Message 4 of 6

I only have one device, so I don't need to create that many tasks. It just seems strange that I can "lose" tasks like this; it might be a result of using the Python module rather than something like LabVIEW. Anyway, thank you for your help; in future, if I expand to more tasks or devices, I will endeavour to be more thorough about closing my tasks.

Message 5 of 6

Well, I looked for a way to avoid losing tasks... and I did not find any API to list the existing tasks, and nothing obvious in the source code: the task is created but not stored externally in a list. (I probably came to the same conclusion months ago when I wrote my app, which would explain why I store the tasks I create in a list.)

 

The only related property, device.tasks, lists just the persisted tasks (tasks saved in DAQmx, if I understood correctly), which is not what I have been doing.

 

So we have to make sure we track every call to a task-creation API ourselves. Not optimal. Maybe someone knows a better way to do it.

Message 6 of 6