February 17, 2018

I'm anticipating having to move next summer, so now that I'm finally debt free, I'm actively trying to save every penny I can so that when I move, I'll be able to purchase a place outright wherever I end up.
I'm also working on ensuring that I have enough savings to be able to fund three people for 18 months in a lower-cost portion of the United States to build a game that I have designed. I may still have to work full-time elsewhere to fund this endeavor, but this is probably my last shot to see if I have what it takes to be in the games industry the way that I want to be.
Obviously, this is leading to me trying to scale back my expenses wherever possible and practical. As a result, the current Shacknews Slow Jam will probably be the last game jam that I contribute prizes to.
February 11, 2018
Electric Eye Lite - Threading Model
I'm going to quickly go over how the threading model has changed in Electric Eye over the last two years.
Originally, Electric Eye was single-threaded. The data flow was:
Thread #1: Get Video Frame → Extract Testable Frame → Get Audio Frame → Run Test Case → Update UI → Repeat
For our original purposes (reducing our range from 300ms to 80ms), this was fine, but it didn't scale to more complex test cases.
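To make the later threading sketches concrete, here's a minimal sketch of that single-threaded loop. All the function names and the AudioChunk type are illustrative stand-ins, not the actual Electric Eye API.

```cpp
// A minimal sketch of the original single-threaded loop. Everything below is
// a hypothetical stand-in for the real Electric Eye code.
#include <vector>
#include <opencv2/core.hpp>

struct AudioChunk { std::vector<float> samples; };

cv::Mat GetVideoFrame();                          // blocking frame capture
cv::Mat ExtractTestableFrame(const cv::Mat& raw); // crop/warp to the screen region
AudioChunk GetAudioFrame();                       // audio captured since the last iteration
void RunTestCase(const cv::Mat& frame, const AudioChunk& audio);
void UpdateUI();

void RunSingleThreaded()
{
    for (;;)
    {
        cv::Mat raw = GetVideoFrame();
        cv::Mat testable = ExtractTestableFrame(raw);
        AudioChunk audio = GetAudioFrame();
        RunTestCase(testable, audio);
        UpdateUI();
    }
}
```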
Our next threading model split the work in two: a UI/acquisition thread and a test case thread.
Thread #1: Get Video Frame → Extract Testable Frame → Get Audio Frame → Enqueue Frame → Update UI → Repeat
Thread #2: Dequeue Frame → Run Test Case → Repeat
Thread #2 used a consumer model based on the lock-free queue from C++ Concurrency In Action. However, we started running into issues when we switched over to using UMats in our OpenCV code: using GPU resources on thread #2 was impacting our UI and causing frame time issues with our acquisition thread.
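For reference, here's the shape of that producer/consumer handoff. The real tool uses the lock-free queue from the book; this sketch substitutes a plain mutex-and-condition-variable queue with the same enqueue/dequeue shape, and the FrameData fields are simplified assumptions.

```cpp
// A sketch of the frame handoff between threads. A mutex-based queue stands
// in for the lock-free queue from C++ Concurrency In Action.
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <vector>
#include <opencv2/core.hpp>

struct FrameData
{
    cv::Mat frame;                                  // raw captured frame
    std::vector<float> audio;                       // audio from this iteration
    std::chrono::steady_clock::time_point captured; // capture timestamp
};

class FrameQueue
{
public:
    void Enqueue(FrameData fd)
    {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_queue.push(std::move(fd));
        }
        m_cv.notify_one();
    }

    // Blocks until a frame is available.
    FrameData Dequeue()
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_cv.wait(lock, [this] { return !m_queue.empty(); });
        FrameData fd = std::move(m_queue.front());
        m_queue.pop();
        return fd;
    }

private:
    std::mutex m_mutex;
    std::condition_variable m_cv;
    std::queue<FrameData> m_queue;
};
```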
Our next threading model had us using three threads: acquisition, test case, and UI.
Thread #1: Get Video Frame → Extract Testable Frame → Get Audio Frame → Enqueue Frame → Repeat
Thread #2: Dequeue Frame → Run Test Case → Repeat
Thread #3: Update UI when possible
An early mistake we made with this threading model was trying to get the video frame into a UMat at the end of thread #1 to speed up thread #2. This led to resource starvation issues (you can only have so many GPU resources allocated), and it still caused timing issues.
Our final threading model still has three threads, but we shifted where we extracted the frame.
Thread #1: Get Video Frame → Get Audio Frame → Enqueue Frame → Repeat
Thread #2: Dequeue Frame → Extract Testable Frame → Run Test Case → Repeat
Thread #3: Update UI when possible
We actively ensure that thread #1 never touches GPU resources whatsoever. In each iteration, we get our video frame if one is available (with our exposure controls, this takes ~3-5ms per frame), grab whatever audio came in during the iteration, generate a FrameData object, and enqueue it.
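Here's a sketch of that acquisition loop, reusing the FrameQueue and FrameData from the earlier sketch. TryGetVideoFrame and DrainAudioBuffer are hypothetical helpers, not the real capture API.

```cpp
// A sketch of the final-model acquisition thread. Everything here stays on
// the CPU: plain cv::Mat, no UMat, no GPU allocations.
#include <atomic>
#include <chrono>
#include <vector>

bool TryGetVideoFrame(cv::Mat& out);   // hypothetical non-blocking capture
std::vector<float> DrainAudioBuffer(); // hypothetical: audio since last call

void AcquisitionThread(FrameQueue& queue, std::atomic<bool>& running)
{
    while (running.load())
    {
        cv::Mat raw;
        if (!TryGetVideoFrame(raw)) // ~3-5ms per frame with our exposure controls
            continue;

        FrameData fd;
        fd.frame = std::move(raw);  // stays a CPU-side cv::Mat
        fd.audio = DrainAudioBuffer();
        fd.captured = std::chrono::steady_clock::now();

        queue.Enqueue(std::move(fd));
    }
}
```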
In thread #2, we extract the testable frame using code similar to the warpPerspective code I spoke about before, with one extra perspective fix plus my patented curved screen code, and turn the frame into a UMat. We then run the test case against the extracted frame and/or the audio we captured, and if anything in the UI changed, we signal the UI thread that it needs to update now.
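A sketch of that test-case thread is below. The plain cv::warpPerspective stands in for the full extraction (the extra perspective fix and curved-screen correction are omitted), and the homography, EvaluateTestCase, and signalUI callback are assumptions for illustration.

```cpp
// A sketch of the test-case thread. The frame only becomes a UMat here, so
// all GPU (OpenCL) work is confined to this thread.
#include <atomic>
#include <functional>
#include <vector>
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Hypothetical: returns true if the result changed and the UI should update.
bool EvaluateTestCase(const cv::UMat& frame, const std::vector<float>& audio);

void TestCaseThread(FrameQueue& queue, std::atomic<bool>& running,
                    const cv::Mat& homography, cv::Size screenSize,
                    const std::function<void()>& signalUI)
{
    while (running.load())
    {
        FrameData fd = queue.Dequeue();

        // Upload the frame to the GPU and extract the testable region.
        cv::UMat input = fd.frame.getUMat(cv::ACCESS_READ);
        cv::UMat testable;
        cv::warpPerspective(input, testable, homography, screenSize);

        if (EvaluateTestCase(testable, fd.audio))
            signalUI(); // wake the UI thread to redraw
    }
}
```

Keeping the Mat-to-UMat conversion inside this one thread is the point of the final model: the acquisition thread's frame timing can no longer be disturbed by GPU contention.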
Thread #3 is just a standard UI thread. It also handles commands arriving via IPC from our command-line tool, but it simply routes them into standard UI commands.
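For completeness, here's one way the "update when possible" wake-up for thread #3 could look. The condition-variable signal and RedrawResults are assumed mechanisms; the real tool's UI framework and IPC plumbing are not shown.

```cpp
// A sketch of an "update when possible" UI loop driven by a dirty flag.
#include <atomic>
#include <condition_variable>
#include <mutex>

class UISignal
{
public:
    // Called from thread #2 (or the IPC handler) when the UI is stale.
    void Notify()
    {
        { std::lock_guard<std::mutex> lock(m_mutex); m_dirty = true; }
        m_cv.notify_one();
    }

    // Called from thread #3; sleeps until there is something to draw.
    void Wait()
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_cv.wait(lock, [this] { return m_dirty; });
        m_dirty = false; // coalesce any signals that arrived while drawing
    }

private:
    std::mutex m_mutex;
    std::condition_variable m_cv;
    bool m_dirty = false;
};

void RedrawResults(); // hypothetical repaint of the latest test results

void UIThread(UISignal& signal, std::atomic<bool>& running)
{
    while (running.load())
    {
        signal.Wait();
        RedrawResults();
    }
}
```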
February 4, 2018
Electric Eye Lite - Introduction
It's still going to be some time before I'm able to do a full source release of Electric Eye through work. Since Electric Eye was revealed to the world, we've done over 100 internal releases and more than fifteen releases to partners. We've dramatically reduced the error bars in our measurements, fixed a lot of bugs, and in general made the tool far more robust. As a result, we have a very stable but very messy codebase.
Over the next month, I'm going to talk through a clean implementation of the non-patented parts of Electric Eye and walk through the creation of what will essentially be "Electric Eye Lite," or "EEL."
Over the next four posts, I'm going to talk through each of the three threads inside the codebase (data acquisition, testing, UI), cover the lessons learned over the last two years of working on the tool, and finally bring it all together in a simple, clean codebase.
All the code will be over on GitHub, licensed under BSD 3-Clause.
Talk to you soon.