I’ve been trying to track down the source of a current consumption issue; however, the data displayed by the Arc/Otii shows lots of high-di/dt current spikes, which are also reflected to some extent in the voltage plot.
I’m not yet certain whether this is a measurement artifact or real, as it suggests there are 70 mA spikes every 26 ms. The spikes don’t occur in the same place in the waveform either, which suggests possible instability in the AFE of the Otii.
Are you able to share the schematics for the AFE section?
Or is this a known problem with a proposed solution already?
Could you make a short recording showing these spikes, save the project and attach it to this thread? I will take a look and get back to you.
Sure, I’m just running a discharge profile test with a current waveform similar to what we typically see just to confirm system behaviour. This will take a few days to complete but I’ll send a recording of what I mean through once this test has finished.
P.S. While I think of it: for the next software revision, could some basic cursors be added for the Y scale so delta measurements can be made more easily? Even if you simply read out the plot coordinates at the mouse cursor position…
Great, take the time you need and I will check the project when you send it.
I will add your idea to the list of improvements. Thank you for your feedback!
While this prolonged discharge test has been running (using a slightly modified version of the battery-profiling Lua script), the UI has become progressively slower. What is causing this? Is there a better way to be doing this?
In fact, I just went to move the Lua script window to see what iteration it was up to, and following an “Application not responding” title-bar message, there is now an error flagged (see below). The script has stopped running, but the Otii is still happily ticking over like it’s supposed to…
line 62: attempt to index a nil value (local ‘battery_data’)
I’ve simply been copying this output pane into Excel and having Matlab do some crunching on the data. Now, however, I’ll need to pull it directly from the recording file… or stop this test midway through (after it has run for about 40 hours) and re-run it (with no guarantee it will complete).
Can you share the file format of the recording file so I can extract this data and process it directly from raw form?
Or do I need to nut out the scripting to output this pane directly to csv?
Could you attach your modified Lua script to this thread (I’ve enabled upload of .lua files) and we will see if we can figure out why you get an error on line 62.
While Otii is making a recording it creates temporary files in your documents folder; on Windows that is My Documents/otii/.openproject. There you will find a number of binary .dat files and a .json file, which is the project description. If these files have recent modification times then your data is still being saved, and you can try to recover them to extract your data. Start by making a copy of the .openproject folder so you can restore it if needed. After you have made a copy, forcefully terminate Otii; the next time you start it, it will try to recover the project. Once recovered, you can right-click on the recording and export the data as CSV if you want to import it into Matlab.
If this fails you can try reading the .dat file directly. It is a long sequence of doubles containing your actual data. It is not time-coded, so the sample rate indicated in the .json file determines the timestamp of each sample.
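A minimal Python sketch of reading such a file, assuming the .dat really is nothing more than consecutive little-endian 64-bit floats and that you pull the sample rate out of the .json yourself (the function name and signature here are my own, not part of any Otii tooling):

```python
import numpy as np

def read_otii_dat(path, sample_rate):
    """Read a raw .dat file as a flat sequence of doubles and attach timestamps.

    Assumes little-endian float64 samples with no header, as described above;
    sample_rate (in Hz) must come from the project's .json file.
    """
    samples = np.fromfile(path, dtype="<f8")
    timestamps = np.arange(samples.size) / sample_rate
    return timestamps, samples
```

If the values come out as nonsense (e.g. absurd magnitudes), the endianness or element-size assumption is wrong and the dtype needs adjusting.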
The output in the ScriptView window is unfortunately never saved anywhere by the application, so if you can’t access it from the UI it is gone.
There might be two reasons why the UI is not responding. I’m not exactly sure about your project setup, but if you have graphs visible there are a lot of samples in them after 40 hours. We know this affects performance and we are investigating how to improve in this area. What we suspect, however, is that the ScriptView does not handle very large output well, and I suspect you have more than 20,000 lines in there. We were not aware of this previously, but when running some tests with large data in the ScriptView we noticed a major slowdown.
We would recommend that you do this kind of operation with otiicli instead, as it does not draw any UI; it should be no problem to run it for several days. You might want to pipe the output to a file so you keep all the history (and your terminal’s buffer doesn’t limit you), or write to a file directly from the Lua script.
The only mods to the Lua script are in the current-profile definition.
The script itself is the standard one from the scripting help.
As it happens, I got to the office today to find Windows had restarted after updating some drivers. So the files may be the only record of the test occurring. I’m about to see if I can recover some data from the 7+ day test…
The Otii looks to have recorded this - I now just have to get the file(s) into some openable form (the DAT files are between 2-10 GB and the CSV is 40 GB).
Yes, I had the plot window up and open, and if it’s trying to display several GB of data… not surprisingly it runs slow.
Fortunately your app has kept the data as recoverable… So thanks!!!
I would just like to confirm that you managed to recover your project by opening Otii again. I assume this is how your CSV file was created.
To make the data more manageable there are some things you can do.
If you aren’t interested in all of the data but only in a particular time interval, you can crop it. I suspect it will be quite slow to make a selection on that amount of data, but once done you can right-click and select Crop. Please note that if you save your project afterwards, the data you cropped away is gone.
You can downsample the data to reduce its size. Again, if you save your project after doing this you will lose data. When downsampling you input a factor; if you input 4, the resulting data will be a quarter of the size. It takes 4 samples and keeps their average, so please note that a short spike in the graph will be flattened out. If you are looking for voltage drops when applying short current pulses, this is not optimal.
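A small Python sketch of that averaging scheme, just to illustrate the spike-flattening effect (the helper is mine, not part of Otii):

```python
import numpy as np

def downsample_avg(samples, factor):
    """Downsample by averaging non-overlapping groups of `factor` samples,
    mirroring the behaviour described above. A 1-sample spike averaged over
    4 samples is reduced to a quarter of its height above the baseline.
    """
    n = (len(samples) // factor) * factor  # drop any trailing partial group
    return np.asarray(samples[:n], dtype=float).reshape(-1, factor).mean(axis=1)
```

For example, a single 70 mA spike in a group of four otherwise-zero samples would come out as a ~17.5 mA bump after downsampling by 4, which is why this is a poor choice when hunting short pulses.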
When the data is open in Otii you can access it from the script window, so you can write a Lua script that does the filtering you need. The script can also save just the things you are interested in to a CSV file, possibly greatly reducing its size.
You can use an external application (a Python script, for example) to process either the .dat file or the .csv and extract the information that is relevant to you.
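For a CSV that is tens of GB, the key is to stream it in chunks rather than load it whole. A hedged sketch with pandas (the column name is an assumption — check the header of your exported file):

```python
import pandas as pd

def extract_column_stats(csv_path, column, chunksize=1_000_000):
    """Stream a huge CSV in fixed-size chunks so it never has to fit in RAM,
    keeping only running min/max/mean of one column.
    """
    total, count = 0.0, 0
    lo, hi = float("inf"), float("-inf")
    for chunk in pd.read_csv(csv_path, usecols=[column], chunksize=chunksize):
        col = chunk[column]
        total += col.sum()
        count += len(col)
        lo = min(lo, col.min())
        hi = max(hi, col.max())
    return {"min": lo, "max": hi, "mean": total / count}
```

The same pattern extends to whatever per-pulse metrics you actually need; the point is only that each pass touches one chunk at a time.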
I hope that you will be able to extract the data you are interested in. Please let us know how it goes!
After nearly losing 7 days’ worth of data, I now have a Matlab script that will open a file of any size - 30 or 40 GB, no problem at all. It takes some time, but it doesn’t lose any fidelity in the data, and I can re-run it if I need to add a metric. I added a % progress field today so I could see how much longer it had to run…
It loads an arbitrary number of values into a Matlab datastore entity, then processes and plots the loaded data. I currently display the block voltage and current waveforms, then use a falling-edge detection method on the current (as its amplitude is roughly constant) to pick up each individual transmit pulse. I plot each TX pulse from each block, ignoring any leading-edge data left over from the previous block.
I then extract the relevant information from each block/TX pulse and plot that. From this I calculate some values of interest and output those to another file (which is only ~100 kB).
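The edge-detection step can be sketched like this in Python (a rough equivalent of the Matlab approach described above; the threshold is something you would pick from your own waveform, and the function name is mine):

```python
import numpy as np

def falling_edges(current, threshold):
    """Return sample indices where `current` crosses from above `threshold`
    to below it -- one index per detected falling edge / end of a TX pulse.
    """
    above = np.asarray(current) > threshold
    # A falling edge is a sample below threshold whose predecessor was above.
    return np.flatnonzero(above[:-1] & ~above[1:]) + 1
```

On real data you would likely add a minimum spacing between detected edges (debounce) so ringing on the edge doesn’t produce duplicates.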
I am then planning to correlate the Otii data with readings from an external temperature probe (though I could also use our existing temperature-sense circuit wired to the Otii ADC/Sense pins - the use of which needs more documentation, i.e. example application circuit diagrams).
Once I have temperature (and the test chamber variation) incorporated in, I can then create a set of look up tables to map loaded and unloaded battery voltage to capacity across all states of charge and our temperature ranges, and therefore have an accurate battery model.
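The lookup-table idea can be sketched as follows; everything here is purely illustrative (made-up table values, a hypothetical helper, and nearest-temperature selection with linear interpolation in voltage as one simple realisation):

```python
import numpy as np

def capacity_from_voltage(voltage, temp_c, tables):
    """Map a measured battery voltage to remaining capacity.

    `tables` maps characterised temperature (degC) -> (voltage_points,
    capacity_points), both sorted ascending. We pick the nearest
    characterised temperature, then linearly interpolate in voltage.
    """
    nearest = min(tables, key=lambda t: abs(t - temp_c))
    volts, caps = tables[nearest]
    return float(np.interp(voltage, volts, caps))
```

For example, with a 25 °C table mapping 3.0 V → 0 %, 3.7 V → 50 %, 4.2 V → 100 %, a reading of 3.7 V at 24 °C would come back as 50 % capacity. A fuller model would also interpolate between the two nearest temperature tables rather than snapping to one.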
Can you clarify the connections for Sense+/Sense- and ADC+/ADC- pins please?
Thanks again for your help.
I’m glad to hear that you were able to recover your data.
Regarding the ADC+/ADC- pins, I would recommend you have a look at the Connecting the Hardware section in the bundled help; there is some information about using the ADC connections. I’ll summarize it here:
You can connect the output voltage from your temperature probe to the ADC+ pin, then enable that channel from the UI in the Voltage tab (or enable the channel ‘ac’ from a script). The value reported is the difference between ADC+ and AGND. Please note that you should supply at most 5 V to ADC+.
Since software 1.5.2 and firmware 1.0.6 you can also enable the Sense+/- pins using the ‘sp’ and ‘sn’ channels from a script. They are not exposed in the UI yet, and unlike the ADC+ pin they are not factory-calibrated.
If I understood you correctly I think the ADC+ pin will suffice for now. If you want to use the Sense pins as well let me know and I’ll get back to you with instructions for calibrating them. We intend to add this to the UI in a future release.
Hi again Christer,
I have hacked one of our boards to expose the temperature sensing NTC voltage divider, but I don’t want to power the whole board up, I just need an appropriate representative thermal mass.
I want to correlate chamber temperature (and its thermal regulation cycles) with the temperature reported by our device (uninfluenced by FW and unit behaviour).
The 5V pin on the expansion port doesn’t output 5 V, and there is no way from the UI to set a GPIO pin to act as a supply pin (ideally it would be a controlled supply, so it could also play back a battery profile to see how the temperature/ADC readings respond to a variable reference).
The only way I can see to do this is via scripting - i.e. expansion.lua
But how much of this do I need? This?
local devices = otii.get_devices()
assert(#devices > 0, "No available devices")
local box = otii.open_device(devices[1].id)
assert(box ~= nil, "No available otii")
I’ll have a play today to see if I can get it to run the way I want - but could you answer this in the meantime?
This functionality really needs to be exposed via the UI, at least the basic stuff.
OK, so you only want to power the NTC on your board, not read it as well.
It is also possible to read the NTC by connecting ADC+ and reading the voltage.
The GPO can supply up to 50 mA, but there are also restrictions on the total power, so I recommend keeping it lower than that.
Yes, your script is partially right, but you can leave out "box:set_adc_range(3000)" as this command has been removed. Thank you for reminding us; we will update the scripting help accordingly.
When enabling the GPO use:
If you want to read the voltage on ADC+ pin, you can use
This value is measured in relation to AGND so make sure that you connect this also.
You can also graph this by ticking in the box ADC voltage in the Voltage tab in Device Settings.
Regarding the UI, we are working on increased UI functionality so that you will be able to graph all GPI, GPO, and SENSE pins.
We are also working on the +5V output pin on the expansion port.
The expansion port voltage is already exposed in the UI as Digital voltage level in the supply tab in Device Settings.