A huge amount of work has been done on the internals of the program, too much to list. All in all, such changes tend to lead to a string of bugs, so please pay attention to any irregularities and report them; I will be closely monitoring the inbox.
Writing a script in an ExecuteScript node and accidentally removing it is a pain. Been there, done that.
This feature allows you to easily roll back such changes on the spot. For now, Undo/Redo supports only node addition/removal; other node modifications will be supported as soon as the mechanism becomes stable enough.
IBlazorWindow now has a new property TitleBarViewType which allows you to fully replace the window title bar with whatever you want.
Previously, there was a single way to make Trees and Macros "tick" and "run" (correspondingly).
If you wanted to make a tree run when/while some hotkey was pressed, you had to use a separate Aura with a HotkeyIsActive trigger in it.
In the new version you can add that same hotkey to the Tree/Macro directly, drastically reducing the number of operations and the complexity of the configuration.
In the original HotkeyIsActive triggers, there is functionality which allows you to restrict hotkeys to a specific window/application/etc.
This mechanism is directly responsible for enabling/disabling hotkey handling at any given moment. By adding one or multiple auras you can specify the exact set of conditions which must be met before the Trigger starts doing anything.
For example, by linking an Aura which has WindowIsActive in it, you make the Trigger react only to keys pressed while the game window is active.
In more complex cases, you can link the trigger to some in-game condition. For example, when you click RMB (part of the HotkeyIsActive configuration) AND some powerful skill is off cooldown (linked condition), instead of casting whatever is usually on RMB, you'll simulate the key press of a button which casts that skill. A good example would be automating the cast of Vaal skills in Path of Exile: you have the usual version of a skill bound to some button which you spam, and whenever the Vaal version has enough souls, it will be released automatically without you having to remember about it.
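A generic sketch in Python (not EyeAuras code) of the linked-condition logic described above: the trigger's action fires only when the key matches AND every linked condition is currently met; otherwise the press falls through to whatever the key normally does.

```python
# Hypothetical illustration of hotkey handling gated by linked conditions.
def handle_key(pressed_key, hotkey, conditions, action):
    if pressed_key == hotkey and all(cond() for cond in conditions):
        action()
        return True    # key press consumed by the trigger
    return False       # key press passed through unchanged
```

In the Vaal-skill example, `conditions` would hold a single "skill is off cooldown" check, and `action` would simulate pressing the button that casts it.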
In BTs, the equivalent is adding an Enabling Condition on the tree itself, just like you did before. The hotkey will be intercepted only if the Enabling Condition is met. We'll see how that approach goes - please send your feedback.
Very important new node in the BTs ecosystem - it allows you to conditionally interrupt already-running nodes.
Usually, when some node is Running (like Wait or Until Success), other nodes in the tree have no chance to do anything - EyeAuras just waits for the node to complete its work.
In contrast, the Interrupter node, when executed, ALWAYS runs the Condition (left) node, even if the Action (right) node is Running. Moreover, if the Condition succeeds, instead of giving control back to the already-Running Action, it will interrupt it.
This could be used to break long-running actions (e.g. Wait) or make some loop (like Until Success) stop on demand - the node is immensely powerful given the right conditions.
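A minimal behavior-tree sketch in Python (not EyeAuras code) illustrating the Interrupter semantics: the Condition child is evaluated on EVERY tick, even while the Action child is Running, and a successful Condition aborts the Action instead of handing control back to it.

```python
SUCCESS, FAILURE, RUNNING = "Success", "Failure", "Running"

class Flag:
    """Stand-in Condition: succeeds once its value is set to True."""
    def __init__(self):
        self.value = False
    def tick(self):
        return SUCCESS if self.value else FAILURE

class Wait:
    """Stand-in long-running Action: stays Running for a number of ticks."""
    def __init__(self, ticks):
        self.ticks = ticks
        self.elapsed = 0
    def tick(self):
        self.elapsed += 1
        return SUCCESS if self.elapsed >= self.ticks else RUNNING
    def reset(self):
        self.elapsed = 0

class Interrupter:
    def __init__(self, condition, action):
        self.condition = condition   # left child, checked first on every tick
        self.action = action         # right child, may stay Running

    def tick(self):
        if self.condition.tick() == SUCCESS:
            self.action.reset()      # interrupt the Running action
            return SUCCESS
        return self.action.tick()
```

With `Interrupter(Flag(), Wait(100))`, the Wait keeps Running while the flag is down; raising the flag interrupts it on the very next tick.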
Very simple node which checks the current state of a specific key. If it is pressed, the node returns Success.
Another node which is expected to be paired with long-running nodes such as Until Success - it allows you to set the maximum time the child node is "allowed" to Run. The first combination of nodes which comes to mind is something like this:
By combining Timeout -> Until Success -> Image Search you basically tell the tree: Keep trying to find the image until you Succeed, but no longer than 2 seconds.
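A rough sketch in Python (not EyeAuras code) of what the Timeout -> Until Success -> Image Search chain expresses: keep retrying the child step until it succeeds, but give up once the time budget runs out.

```python
import time

def until_success_with_timeout(step, timeout_s, poll_s=0.05):
    # step: a callable standing in for e.g. an image search, True on a match.
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if step():
            return "Success"
        time.sleep(poll_s)     # retry until the Timeout interrupts the loop
    return "Failure"
```

For the example above, `timeout_s=2.0` gives the "no longer than 2 seconds" behavior.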
Started working on startup time improvements, more specifically on the "cold" (first) start.
Here are startup times for the latest pack for GTA5 by @linx
- https://eyeauras.net/share/S202503032233289fPxqP4AA0N0
v8215 - 33.41s
v8224 - 28.89s
So the new version is approximately 13.5% faster on the first launch.
Subsequent launches:
v8215 - 29.38s
v8224 - 27.10s
~8% faster.
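The quick arithmetic behind the percentages quoted above:

```python
# Relative speedup between two startup-time measurements, in percent.
def speedup_pct(before_s, after_s):
    return (before_s - after_s) / before_s * 100

cold = speedup_pct(33.41, 28.89)   # first ("cold") launch
warm = speedup_pct(29.38, 27.10)   # subsequent launches
```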
I will keep working on improving those stats.
Sometimes you do not need ALL the functionality built into EyeAuras in your Packs. For example, if your pack does not use computer vision at all, it does not make much sense to load CV-related modules into memory - this is a waste of both CPU and RAM. With the new functionality around working with applications directly - such as reading memory - it is expected that there will be more and more cases where you just want to use EyeAuras as a platform for developing and distributing your own app.
This feature is intended exactly for that purpose - you can now prevent some parts of the application from loading. The blacklist is a part of the Pack configuration. By default, everything is shipped, but for now there are two types of modules which can be excluded:
KeyLogin is a way to sign in using a one-time license key — no account creation required. This system is intended to be used along with Sublicenses to allow your users to access your cool Packs as quickly as humanly possible.
This is how the process looks right now.
For the end user (the person who will be using the pack):
Download pack
That is it. I think this is as simple as it could get; we'll stay in this state for some time and see how it goes.
For you (the author who is creating the pack):
Yeah... for authors the process is not as straightforward. I will streamline it this year.
Allows you to check that the color of a specific pixel (or region) matches the selected one.
Probably one of the most useful nodes out there - fast, very easy to use and very flexible.
Allows you to find an image somewhere on the screen or in a selected region.
Added a new node intended for those who are more comfortable with "classic" logic building.
New node in BTs and Macros which allows you to run an ML search over a region of a screen/window.
It is still missing a lot of options currently present in MLSearchTrigger, such as Confidence/IoU thresholds, inference method, number of threads, etc. - those will be added in the near future.
Note that by itself this node does not do anything - you have to pair it with other nodes such as MLFindClass or MouseMove to get anything out of it.
By default, it will pick the very first object and make it available for clicking via MouseMove - for some very simple scenarios that will be enough; for more complex ones, use MLFindClass.
The node will return Success only if there is at least one found prediction.
This node must be used in pair with MLSearch and allows you to pick some class (~object) from the output generated by MLSearch - you can set filtering parameters such as class name(s), minimum confidence threshold, size restrictions, etc. Predictions which do not match any of those will be filtered out.
Out of the predictions left after filtering, you can make the node pick exactly one - the very first in the list, the one with the highest confidence, or even the one currently closest to the cursor.
The node will return Success only if there is at least one prediction that matches all the criteria.
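A generic sketch in Python (not the actual EyeAuras API) of the filter-then-pick logic described above: drop predictions that fail the criteria, then select exactly one of the survivors, here the one closest to the cursor. The prediction fields (`class`, `confidence`, `center`) are illustrative.

```python
import math

def find_class(predictions, class_names, min_confidence=0.5, cursor=(0, 0)):
    # Filter: keep predictions of the requested class(es) above the threshold.
    candidates = [p for p in predictions
                  if p["class"] in class_names
                  and p["confidence"] >= min_confidence]
    if not candidates:
        return None   # the node would return Failure in this case
    # Pick exactly one: here, the prediction closest to the cursor.
    return min(candidates, key=lambda p: math.dist(cursor, p["center"]))
```

Swapping the `key` function would give the other selection modes (highest confidence, first in list).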
Of course, both MLSearch and MLFindClass can be used in combination with all other nodes. Here is, for example, priority-based target selection:
Let's welcome a new node, accessible in Macros and BTs. It is the first in a batch of computer-vision-focused nodes coming soon. Those nodes will be a direct replacement for the Triggers you're currently using in Auras. The migration process will take a lot of time, and the new Nodes will probably reach feature parity with the corresponding Triggers only months from now, but eventually we'll get there.
Note that Nodes DO NOT tick on their own. Image capture and analysis happen at exactly the moment the node gets executed.
Initially, performance of the nodes could be worse than that of the Triggers - this is due to the new mechanisms implemented for the Computer Vision API which powers these new nodes.
The new Nodes will write their results to Variables which are available via Bindings.
For now, there is a single variable CvLastFoundRegion which contains the last found region. This could be the coordinates of a found pixel, of a found image, or of an ML class.
In MouseMove, there is now a new button which allows you to very quickly and easily use that variable as a source of coordinates.
Just click the button and, when executed, MouseMove will try to read coordinates from that variable.
Implementing a first prototype of the system which should allow using the CV API in BTs and Macros. The general idea is that the methods you call, be it ImageSearch/ColorSearch/etc., will now cache the underlying capture/processing mechanisms, which should make subsequent similar calls from other parts of the program much, much cheaper. We're talking seconds VS milliseconds here.
The mechanism tries to track which parts of the program use different parts of the CV API and caches them accordingly. Monitor memory usage! Such systems usually tend to leak memory. A lot. Especially until properly tuned.
This API is still very rough around the edges and will be improved in upcoming releases.
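A simplified sketch in Python (not the real implementation) of the caching idea described above: the expensive capture/processing pipeline is built once per unique set of parameters and reused by subsequent calls.

```python
built = []   # tracks how many pipelines were actually constructed

class Pipeline:
    """Stand-in for an expensive capture/processing setup (seconds)."""
    def __init__(self, region):
        built.append(region)
        self.region = region
    def search(self, target):
        return f"searched for {target} in {self.region}"

_cache = {}

def image_search(region, target):
    if region not in _cache:                 # first call pays the full price
        _cache[region] = Pipeline(region)
    return _cache[region].search(target)     # later calls are cheap
```

The leak risk mentioned above comes from `_cache` growing without bound: entries must eventually be evicted when nothing uses them anymore.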
This method is now part of the Computer Vision API and allows you to entirely bypass the caching mechanism if you do not need it - e.g. when your entire program is a single C# script. In that case it does not make sense to rely on EA caching mechanisms and pay the extra (yet small) price for them.
Added Clear() method to MiniProfiler.