11-17-2016 12:39 PM - edited 11-17-2016 12:39 PM
Wouldn't it be nice if we could just draw some code freehand and the LabVIEW compiler would make sense of it and turn it into LabVIEW code? (e.g. a square for a For Loop, a circle for a While Loop, an "x" for a Multiply primitive, etc.)
For your entertainment, here's a Google AI experiment. See how well it does with your drawings. 😄
(Make sure to turn on the speakers! :D)
11-17-2016 01:07 PM
11-17-2016 01:31 PM - edited 11-17-2016 01:32 PM
This is kinda fun. I was asked to draw "pool". I didn't know what kind of pool it meant, so I went with the above-ground type you swim in. I thought I did a pretty good job, but it didn't agree. I also thought my microphone was pretty good, but it didn't get that one either; somehow it did get my harp, which looked more like a fish with legs.
Unofficial Forum Rules and Guidelines
Get going with G! - LabVIEW Wiki.
17 Part Blog on Automotive CAN bus. - Hooovahh - LabVIEW Overlord
11-18-2016 09:56 AM
That was fun!
11-18-2016 04:47 PM
That was a lot of fun. I have no idea how it got string bean from my picture.
11-28-2016 03:30 PM
With how good voice-to-text has become in the last couple of years, I'm more excited to try that. One of my pet projects next year is going to be to rewrite LVSpeak (or my version of it, anyway) using the Cortana API.
It would be interesting to link the commands to a Bayesian-type neural network and have it start "learning" your development style, predicting what you will place next given your block diagram. I mean, imagine... you make a new VI:
User: "LabVIEW - Add error terminals"
LabVIEW: "Would you like me to connect them through an error case structure?"
User: "Yes."
You could even have it take action automatically, instead of asking the user, when the Bayesian hypothesis had a high enough probability. If, for example, 98% of the time I add a case structure after adding error terminals, it would automatically do that as part of the "Add error terminals" command. And this could be completely "learned".
The trick would be making calls through LabVIEW scripting to inspect all the BD objects and how they are connected, and then predicting the next likely BD item based on history. Obviously, the more complex a BD got, the less likely a valid prediction would be, but hey, that is just an excuse to enforce good VI-size practices.
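Just to make the idea concrete, here is a toy sketch (in Python, purely hypothetical, since the real thing would live in LabVIEW scripting) of the simplest possible version: count which object historically follows which, and auto-apply the suggestion only when the learned probability clears a confidence threshold. The object names and the `NextObjectPredictor` class are made up for illustration.

```python
from collections import Counter, defaultdict

class NextObjectPredictor:
    """Toy frequency-based predictor: learns P(next object | last object placed)
    from a history of block-diagram edits, and flags a suggestion for automatic
    application only when the learned probability clears a threshold."""

    def __init__(self, auto_threshold=0.98):
        self.auto_threshold = auto_threshold
        # last_obj -> Counter of objects that followed it
        self.transitions = defaultdict(Counter)

    def observe(self, last_obj, next_obj):
        """Record one observed edit: next_obj was placed after last_obj."""
        self.transitions[last_obj][next_obj] += 1

    def predict(self, last_obj):
        """Return (most likely next object, its probability, auto-apply flag)."""
        counts = self.transitions[last_obj]
        total = sum(counts.values())
        if total == 0:
            return None, 0.0, False
        best, n = counts.most_common(1)[0]
        p = n / total
        return best, p, p >= self.auto_threshold

# Simulated history: 98 of 100 times, an error case structure follows
# adding error terminals (the example from the post above).
pred = NextObjectPredictor()
for _ in range(98):
    pred.observe("error terminals", "error case structure")
pred.observe("error terminals", "merge errors")
pred.observe("error terminals", "sequence structure")

best, p, auto = pred.predict("error terminals")
print(best, round(p, 2), auto)  # error case structure 0.98 True
```

A real version would obviously need richer context than just the last object placed (wire connections, VI hierarchy, etc.), but even this one-step Markov view captures the "98% of the time, just do it" behavior described above.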
I obviously won't have time to do it (or even attempt it), but it is still fun to think about 🙂 I will at least attempt the first part!