- cross-posted to:
- [email protected]
I’m currently getting a lot of timeout errors and delays processing the analysis. What GPU can I add to this? Please advise.
So I run a Debian server, a shitty little 4th-gen i5 of some sort with 8 GB of RAM. It runs BI in a Windows container via Dockur, alongside Docker stacks for Mosquitto and DeepStack, plus other containers for my solar array and inverters.
I have BI's AI pointing at the Debian host machine's IP and the port I mapped for the DeepStack container. It seems to be pretty good at object detection without any GPU, on a shitty little i3 that's a decade or more old.
I use BI because Frigate and the other FOSS options just don't come close to BI's usefulness and ease of setup. I'd love it if there were an alternative, because I fucking loathe having a Windows install on my network, even if it's only running as a Docker container. But that's not the case, so I pay for BI and use some mitigations for having to deal with Windows.
I can post the compose files if you think you might want to give this a try.
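
In the meantime, here's a rough sketch of what that kind of stack looks like. These are not my exact files; the image names and env vars are the common defaults from the Dockur and DeepStack docs, the host ports and volume paths are placeholders, and Blue Iris itself still gets installed by hand inside the Windows guest:

```yaml
# Sketch of a Blue-Iris-on-Debian stack: Windows-in-Docker (Dockur),
# DeepStack for detection, and Mosquitto for MQTT.
services:
  windows:
    image: dockurr/windows          # Dockur's Windows-in-a-container image
    environment:
      VERSION: "11"                 # Windows version to install in the guest
    devices:
      - /dev/kvm                    # KVM on the host is required
      - /dev/net/tun
    cap_add:
      - NET_ADMIN
    ports:
      - "8006:8006"                 # web viewer for the Windows desktop
      - "3389:3389/tcp"             # RDP
      - "3389:3389/udp"
    volumes:
      - ./windows:/storage          # persists the Windows install (and Blue Iris)
    restart: unless-stopped
    stop_grace_period: 2m

  deepstack:
    image: deepquestai/deepstack    # DeepStack AI server (CPU image)
    environment:
      VISION-DETECTION: "True"      # enable the object-detection endpoint
    ports:
      - "5000:5000"                 # host port that BI's AI settings point at
    volumes:
      - ./deepstack:/datastore      # persists models/config
    restart: unless-stopped

  mosquitto:
    image: eclipse-mosquitto
    ports:
      - "1883:1883"                 # MQTT
    volumes:
      - ./mosquitto/config:/mosquitto/config
      - ./mosquitto/data:/mosquitto/data
    restart: unless-stopped
```

BI's AI settings then get pointed at the Debian host's IP and whatever host port you mapped for DeepStack (5000 in this sketch).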
Yeah, I used DeepStack before and it had much better detection times, but BI recently switched to CodeProject.AI as the supported AI, so I moved over to that. It's not as performant as DeepStack. Maybe I should try going back to DeepStack even though it's no longer officially supported.
That's what I noticed too, so I went back to DeepStack. It integrates with no issues at all; just specify the port and let it go.
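
For anyone else going back, DeepStack on its own is about as minimal as a compose service gets. This is just a sketch; the host port you map here is whatever you then enter in BI's AI settings next to the Debian host's IP:

```yaml
services:
  deepstack:
    image: deepquestai/deepstack   # CPU image; a :gpu tag exists if you ever add a GPU
    environment:
      VISION-DETECTION: "True"     # enables the /v1/vision/detection endpoint
    ports:
      - "5000:5000"                # host port -> DeepStack's internal 5000
    volumes:
      - ./deepstack:/datastore     # persists models/config
    restart: unless-stopped
```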