I'm one of the co-founders of Propolis, the autonomous QA system.
00:04
I'm going to give you a quick run through of how it works.
00:06
If you're on the launch flow, you should be able to set up a swarm immediately.
00:10
So let's log in.
00:12
The swarm is an autonomous test generation engine.
00:15
It explores your app systematically, surfaces bugs, and generates tests. As it runs again and again, it learns about the changes in your app as you push new features, continuously refreshing the tests you have so you don't have to worry about maintenance.
00:30
Let's start with the Weights & Biases platform, which is a pilot customer of ours, as the entry point.
00:37
If you want to provide documentation, you can, but it's not necessary.
00:40
You then provide any variables and files it needs, such as addresses or files for it to use in the platform.
00:48
Then you set up a profile.
00:52
The profile is essentially a saved Chrome state, so it can use your cookies.
00:57
This is more reliable and faster than credential-based login.
01:00
But you're welcome to do that as well.
01:03
The rest is already set up for an initial pass.
01:06
All you need to do is launch the swarm.
01:08
The run is executing right now.
01:11
You can see the live exploration as it goes.
01:13
The agent navigates the page, interacts with elements, all that stuff.
01:16
You'll get an email when it's time to log in and see the results, but I wanted to show you a completed one.
01:22
So you can see here the vast array of tests that the platform generated.
01:29
All these green ones are tests.
01:30
You can see the actions that the agents took, the bugs that they found, and how this all worked.
01:37
Once the run is complete, the agents will place the tests in a Proposed section.
01:43
You then review the tests, each of which comes with a video of the agent actually walking through the platform, the test steps it took to get there, and the check it's making.
01:55
That is, the assertion that something happened.
01:58
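A proposed test, as described, boils down to a sequence of steps plus one final assertion. A minimal sketch of that shape, with hypothetical names and a made-up example app state:

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative only: a generated test as described in the demo is
# a list of UI steps plus one final check (the assertion).
@dataclass
class ProposedTest:
    name: str
    steps: list[str]                  # actions the agent took
    check: Callable[[dict], bool]     # assertion that something happened

def review(test: ProposedTest, app_state: dict) -> str:
    """Accept the test if its assertion holds against the observed app state."""
    return "accepted" if test.check(app_state) else "rejected"

test = ProposedTest(
    name="create project",
    steps=["open dashboard", "click New Project", "enter name", "submit"],
    check=lambda state: "my-project" in state.get("projects", []),
)
print(review(test, {"projects": ["my-project"]}))  # accepted
```

Accepting or rejecting in the review UI plays the same role as `review` here: you're judging whether the steps and the assertion are relevant and accurate.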
You can accept or reject these based on relevance or accuracy.
02:01
Accepted tests then run on certain triggers, like commits or deploys, and the results show up in your test runs page.
02:10
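The trigger logic just described can be sketched as a simple event-to-suite dispatch. The event names and function here are hypothetical, purely to illustrate "run the accepted suite on commits and deploys":

```python
# Hypothetical sketch: map repository/deploy events to test-suite runs.
TRIGGERS = {"commit", "deploy"}

def on_event(event: str, accepted_tests: list[str]) -> list[str]:
    """Return the tests to run for this event (empty if it's not a trigger)."""
    if event not in TRIGGERS:
        return []
    return accepted_tests  # run the full accepted suite on each trigger

print(on_event("deploy", ["login works", "create project"]))
# ['login works', 'create project']
print(on_event("issue_comment", ["login works"]))
# []
```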
This is a way for you to see the different actions the agents took and make sure that everything's going well.
02:19
While the swarm is running, it's also going to find bugs.
02:23
A byproduct of the swarm's exploration is bug discovery: while going after a certain objective, an agent may spot a bug and surface it.
02:32
What to do with those is up to you as well.
02:33
The alternative approach here is for you to build your own tests.
02:38
We have a few ways to do this.
02:39
You can prescribe an objective to the agent and help it walk through that.
02:43
You can upload a video of you actually taking steps and it will make a test based on that.
02:49
Or you can build it step by step yourself.
02:52
So the core value prop here is that the swarm generates tests autonomously and surfaces bugs during exploration.
03:00
These tools give you a manual fallback if you're not getting the tests you need.
03:06
The goal is production-ready tests without manual test authoring, plus a continuous loop that maintains existing tests and auto-generates new ones as your app evolves.