Crime Prevention has partnered with New York University (NYU) to learn about, and share with communities, innovative approaches to community development that could apply in Victoria.

On 6 November 2020, we held a webinar with NYU about the BetaGov learning model – a process for rapid, robust, practitioner-led evaluation.

The webinar was presented by the founding director of NYU BetaGov, Professor Angela Hawken, who demonstrated how organisations can use the BetaGov approach to become ‘pracademic’, taking a practical, evidence-based approach to innovative research.

The webinar was attended by Victorian councils and community organisations, and attendees heard that anyone can be a ‘pracademic’, whether they are new or experienced in research.  

Watch a recording of the webinar. 

 

A transcript of the webinar is also available here.

[Title: Victoria State government / Justice and Community Safety]

[BetaGov Innovate with US - NYU]

Angela Hawken

Thank you so much for the invitation to be here, I really appreciate this opportunity to spend an hour of your morning with you.

The purpose of BetaGov is exactly what you heard during that introduction: how do you get people in communities integrated into the research process? Practitioners who are working with communities – in service agencies, in government agencies, in non-profits – are the closest to the communities they’re serving, and therefore often closest to the problems and best placed to deliver solutions.

So often we tend to look far afield for our solutions, but they might be right under our nose.

So the goal was to really think about ways to make it easier for people to start solving problems right where they’re at, and that really was the goal of BetaGov.

So I’m going to see if I can progress here.

[Slide: Research for Impact NYU Marron Institute]

We’re in house, so we are a non-profit, we’re often asked what is this thing BetaGov, we are a non-profit by virtue of our affiliation, we are at NYU, so we are really a research group house, but in Marron Institute for Urban Management.

We’re a little unusual for an academic team in that our only responsibility every year when we report to the provost is impact.

We have to demonstrate to the provost what is being done differently and better as a result of the work that we do, and that really informs everything we do with the communities and agencies we work with.

It also informs our timeline: we need to make sure that we can help agencies try something and ideally get some sense of whether there’s a signal – whether it’s a thumbs up or a thumbs down – around what they are working on.

[Slide: What works?]

So what’s working?

You know, the truth is we don’t know. We know some, but we don’t know a lot, and the reason we don’t know a lot is that we’ve made it very difficult to learn.

[Slide: Most practices have never been assessed]

So most of the practices that we have in place have never really been assessed, so we don’t really have good data to indicate whether they are actually a good idea or not, and many possibly harmful policies and practices persist because we just don’t know.

[Slide: Evaluations usually involve professional researchers, external funders, red tape, and long timelines]

Evaluations, especially of the programs and practices we’re using in our communities, usually involve professional outside researchers, external funders to pay for that effort, lots of red tape which slows us down a lot, and long timelines.

[Slide: Policies intended to make us smarter, safer, or healthier are based more on history]

So the policies that are supposed to make us smarter, safer and healthier are usually based more on history.

We do it this way because we’ve always done it this way, there’s some manual somewhere that tells us to do it this way, so it’s inertia that comes along with just persisting in doing what we’ve always done. Some of those are good things to do – sometimes you want that history to stick – but sometimes you don’t.

And other things we do are based on our guts, right, our guts are telling us that this is a good idea, this feels right. And while you want what you’re doing to feel right – it has to pass some sense that this is the decent thing to do – the fact that it feels good doesn’t necessarily mean that it does good.

So even if something feels like a great idea go ahead and test it, get some data around that to convince yourselves and the communities that you’re serving that this is indeed a good idea.

We have a long list of pilots that we’ve worked on with agencies where they thought it was obvious this was a great idea, until we actually collected data around it and very quickly that transitioned to something else.

[Slide: EBP - We love EBPs and wish there were more. BUT …]

So, evidence-based programs and practices – I’m sure you’re all steeped in this thinking and this approach, and of course we’re university-based so we love, love, love an evidence base and growing that body of evidence.

[Slide: There are EBPs and “e”BPs. Hard for practitioners to know which is which]

But there’s also a cautionary tale there, and that is that some evidence-based programs and practices are only lowercase-‘e’ evidence-based – we have to worry about that ‘e’ and the quality of that evidence.

Some evidence-based practices are based on truly phenomenal data and solid rigorous work, and others less so, and sometimes it’s hard for a practitioner to differentiate which EBP has a really strong evidence-base behind it and which does not.

[Slide: The challenge of transferability]

The other challenge we find is what we call the challenge of transferability.

Things are different, local communities are different, so if you’re selecting a practice, even one based on really outstanding, good-quality evidence, and you’re implementing it in your community, it might not work the way it did in the community where it was scientifically tested.

So even if you’re taking on something that is deemed an evidence-based program or practice, with good-quality evidence behind it, you want to make sure you’re still keeping data around that new innovation or practice when you’re trying it, testing it or implementing it in your own community.

[Slide: Time for something new]

And then I always tell everybody: even if we’re relying on an existing shelf of fantastic evidence-based programs and practices, please keep room for something new, because things are changing all the time and there’s always room for improvement, so don’t shut down that creativity just because there’s something on the shelf that you can take.

Keep space for your staff and the public you are serving and yourself, just keep thinking in terms of this problem-solving approach of looking around you and seeing what you can be doing differently.

[Slide: If we are serious about wanting to know what works, we need to make it easier to learn]

So if we become very serious about wanting to know what’s working and what isn’t, we have to make it easier to learn.

[Slide: Home-grown EBPs and EGPs]

So this is our big plea to everybody we’re working with now: really think about home-grown evidence.

So there’s a body of evidence-based programs and practices in the research literature, but what about where you’re at – in your city or your town or the jurisdiction you’re in? How about creating a home-grown evidence base specific to your community?

And I also like the language now of what we call EGPs, evidence-generating practices: when you’re trying something new but you’re collecting evidence around it, you’re generating new evidence as you go.

[Slide: Turtle photograph]

So this was the challenge that we saw, and the reason why we created BetaGov: the traditional academic research we were involved in was taking a very long time, and at every step along the way there were reasons for it being slowed down – some of them were good, but some of them didn’t need to be that way.

Take research in criminal justice, for example. Four years ago we were looking at federally funded studies, and on average a study was taking about five years from the time someone submitted the idea until they wrote about it.

But our police chiefs were turning over every 18 months, and our secretaries of corrections running our prisons were turning over even more quickly than that, so when the research finally came out it just wasn’t relevant to the person making the decisions, because they had moved on several times since then.

For health, for those of you interested in health it was even worse.

On average it was around seven years from the time a study was submitted until the results were published.

So the timelines are starting to tighten up a little bit, but there’s still lots to do in terms of making sure we can help decision makers, or help you figure out what’s going on in your backyard on a slightly more aggressive timeline than that.

[Slide: Money photograph - Monopoly game]

The other issue is that research costs money. Traditional, heavier academic research costs a lot of money, so you can only do a little bit of it, and when you can only do a little bit of it you create a monopoly – a monopoly over who gets to weigh in on what gets tested. And if you don’t get to control those research resources, you never get to have a voice in figuring out something new that might work.

So we want to shatter that monopoly, make it easy for anybody who’s working in a public-minded enterprise to be able to have a good idea, raise a hand and let’s try this, can we try this.

[Slide: Our approach to fostering learning organisations - Pracademia]

So our approach to fostering – it’s really about fostering learning organisations.

How do we create a culture within organisations where staff are empowered to learn and have the tools to learn? We thought about doing this with something we call Pracademia.

Now, we’ve actually formalised this at NYU – we have Pracademic credentials for the Pracademics who work with us on studies. Pracademia is just this idea of a practitioner who is also a researcher: they’re doing research even if they don’t have a formal background in research, and they’re becoming part of this learning apparatus within their organisation.

[Slide: Pracademia makes research practitioner-centred]

So a Pracademic is just a practitioner who is also involved with research.

The idea with Pracademia is really to have a kind of parallel track alongside our formal, traditional academic enterprise, which is extremely valuable.

We’re not suggesting that that should go away.

There’s great value in the slower, more expensive, traditional academic enterprise too, but we thought of Pracademia as something on the side that’s complementary to that: it’s much more practitioner-centred, it’s more nimble, and it can point the more traditional academic approach towards strategies that might be worth looking at with those much bigger, heftier studies.

So we assist Pracademics with our approach so they can do this learning where they’re at, and we really promote this approach of data-driven innovation and testing.

And that includes what we call pragmatic randomised controlled trials or RCTs.

[Slide: Sounds complicated. It isn’t. With a little help, anyone can do research]

Now, you say RCT, randomised controlled trial, and that sounds very complicated, but the truth is it isn’t: with just a little bit of help anybody can do research, and that absolutely means you too.

And even if you do or don’t have a – if you have a great background in research that’s great, we have lots of Pracademics who do, but if you don’t that’s great too, we have lots of Pracademics who don’t.
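As a rough illustration of how simple the mechanics can be, the sketch below (not part of the webinar, and purely hypothetical participant IDs) shows one way to randomly assign participants to an intervention or control group for a pragmatic trial.

```python
# Illustrative sketch only: simple random assignment for a pragmatic RCT.
import random

def assign_groups(participant_ids, seed=2020):
    """Randomly assign each participant ID to 'intervention' or 'control'."""
    rng = random.Random(seed)  # fixed seed so the assignment can be reproduced
    return {pid: rng.choice(["intervention", "control"]) for pid in participant_ids}

if __name__ == "__main__":
    ids = [f"P{n:03d}" for n in range(1, 21)]  # 20 hypothetical participants
    for pid, group in assign_groups(ids).items():
        print(pid, group)
```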

[Slide: Our mission, fuelling organisations that learn]

So our mission really is just this fuelling of organisations that learn and empower their people to learn.

And there are several pieces to that, there’s this idea of just raising the awareness of the value of data.

We always say – actually Michael Bloomberg always used to say this, and we still learn from him – “It’s easier to manage it if you can measure it.”

So if there’s data around something it’s much easier to know that you’re headed in the right direction, or to know where to do it.

[Slide: Raising awareness of the value of data]

But to be valued, data has also got to be shared and used. Your staff and your agency will be better at collecting and ingesting that data – people will do that extra work of putting in the data – if they know it’s going somewhere.

So reporting back on data is absolutely essential, and there’s some really good research showing that when data is reported back – when the results are shown to the people responsible for collecting it – a much better job is done of inputting the data.

So make sure you have like datarific extravaganzas, make sure people know data’s important, that they should collect it and it’s something that’s really valued and used.

[Slide: Pilot test practices that have been shown to work elsewhere and test home-grown ideas]

We also really like to work with organisations that are pilot testing practices that have been shown to work elsewhere that might work in their backyard.

But even more than that, we love this idea of working with organisations to pilot test something that is completely home-grown – the initiative of someone in their own backyard or their own organisation.

[Slide: Organisations that nurture local talent]

We love this idea of encouraging organisations to nurture their own local talent too.

We often find that there are these frustrated citizen scientists in our agencies, and with just a little bit of encouragement an extraordinary body of research can flourish within organisations.

[Slide: Staff Innovations]

The idea is to really just let staff know that their ideas are valued, their participation in research is valued, and to ask them to look around their workplaces and be empowered to ask the question: is there something we can do here? Especially in COVID, in a post-COVID world with budget pressures, this is a reasonable question to ask staff – is there something we can do here that could cut costs without compromising our programming or our service delivery at all?

And we see lots of those sorts of examples where people make these submissions and just hey, we can do this and this is going to save us, you know, x number of dollars and it really doesn’t make any difference at all.

Or look around your workplace and think about the programming: what might we be able to do differently in our programming that could make a difference?

So basically empower them to ask the ‘what if’ question.

What if we did this differently?

[Slide: Client innovations]

And then also clients, the community that you’re serving, well they know a lot about themselves.

They know a lot about what’s going on in their local jurisdictions, their local areas.

Reach out to them and solicit their voice too of what do you think we should be trying, what do you think we should do and if you listen it’s really inspiring what we can learn from them.

[Slide: Data-driven innovations]

And especially if you listen, let them know that their ideas will thoughtfully become candidates for something that might be tried.

And then finally, data-driven innovation: if there is data, we really do like to start the engagement by looking at what the data has to say.

Sometimes we’ve found that we’re working with an organisation and they’re solving the wrong problem. If they had spent a little time understanding their data they would have recognised that, and it could have saved us all a bunch of time, because you want staff pointing in the right direction to begin with – proposing candidate solutions to a problem that is actually identifiable in the data.

So fear not your data, embrace your data, bring in your data, it’s often very telling about some direction.

And it also tells you where you’re missing data or don’t have data. If you’re in a data desert, you have to start somewhere – just create that first data if you don’t have any – so a little data dive on the front end can go a long way.

[Slide: What is success?]

And then, I think very importantly, what are you aiming for, what are you hoping to drive towards? That really starts with a conversation – an honest conversation within your offices about what success looks like for us, what we mean by success.

Because the challenge we often find is that, especially when it gets to evaluation work and figuring out if something’s working or not, is we tend to count what’s convenient, we tend to count what we have.

So if there’s data, that’s what the outcome measure is going to be – whatever the data happens to be.

And we need to be a little careful of that, sometimes the important things are in our data and we can go at it, but sometimes people will orient towards what you’re counting too, and in a way that might have unintended consequences so you have to be a little careful of that.

And we had a horrible example recently of a very well-intended prosecutorial reform project from another university, which evaluated prosecutors on certain dimensions and actually ended up having the reverse effect, because prosecutors could potentially be optimising towards an outcome measure in a way that ran counter to what the reform was hoping to achieve. So we have to be careful what we count, and know what it is we want to treat as a measure of success.

So starting with that conversation of what success looks like and then really thinking about data collection.

[Slide: Decide what you want to test]

So you’ve decided you want to go for it, you want to put the learning cap on: decide what it is you want to test.

Is it some program, is it an existing program, is it a new program, do you just want to test something that you’re already planning to do and you want to put some data around that, is it some process that can be improved.

[Slide: How to test?]

There’s lots of great opportunities to do process improvement studies.

And then how are we going to go about testing?

I’m going to walk you through some examples in this session, but what we always do when we start a conversation with a new group we’re working with is think about the most rigorous method you can use that is feasible.

So if it doesn’t pass the test of being practical – can we actually do this? – take it off the table. It has to be feasible, and it has to be ethical; that’s an obvious one, but you really have to go through that full process. Some of you will have formal review requirements from a policy perspective, others won’t, but we always make sure there’s a guiding ethical lens on everything that’s being done.

And then, can we produce insights on a timeline that is relevant?

Is this an opportunity to do some learning in a timeline that makes sense for some decision maker or for that organisation?

[Slide: Study Groups]

And then, very importantly, when we’re thinking about doing evaluation research: where is the comparison going to come from?

Now, in a randomised controlled trial that’s nicely packaged for us because we have an intervention group and a control group, but sometimes you can’t ethically find a way to do a randomised controlled trial, or it’s just not feasible, in which case we have to work out where that other apple is coming from. We want to make sure we have an apples-to-apples comparison – what are we going to compare against? – because we really want to make sure we draw conclusions that don’t misinform us.

[Slide: Study duration]

And then another conversation is around duration, how long does this have to run for?

How long do we have to have this study in place for us to be able to get the right sort of signal, a signal that indicates whether this is working or not?

And we have some studies that lasted hours, we just had to run them for hours.

Others were a couple of days – we did an implicit bias study with some police that lasted two days.

We had a priming study that lasted just a few days.

Sometimes it’s weeks.

Most typically our studies last a few months, but for others there’s no point doing them without waiting for years.

So it really depends on the nature of the work that you want to do; you have to sit down in your teams and really think through what makes most sense given the issue or the topic that you are studying.

[Slide: Compare outcomes]

And then at the end of that outcome period, the follow-up period, you really think about how the intervention group did versus how the control group did.

Now this is where we depart from the traditional academic model.

In the more traditional academic model you don’t want to look at outcomes until the end of the study, because looking at outcomes might make you feel more enthusiastic about one intervention versus the other, so you hold that study in place until the end.

We increasingly recommend that you do keep an eye on study outcomes if you have them easily available along the way, because we’ve often found cases where agencies went into trying something they thought was a no-brainer – this is an obviously good idea – and by the time we got to month two or three we were shutting it down.

And they would have, in a more traditional academic study, have waited until year three.

You can do harm, right, if you’re testing something, or you think you’re starting a new program that’s going to work wonders and it doesn’t. You want to be able to say responsively after three months: you know, we had good intentions here, but this was not something that was serving our community well; we either need to modify this or we need to do something differently.

And not everyone shares that opinion, that’s an opinion we have, we do like to look at data at regular intervals if it’s available to make sure that we are on course towards an improved outcome for your community.

[Slide: Example: The “Chill Plan”]

So here’s an example just to get to something quite specific, and I’ll walk you through a couple of examples and then talk a little bit about the detail of how kind of the BetaGov process works.

So here is an example – it’s called the “Chill Plan”.

It’s actually one that I’m very fond of so I like to talk about this one.

And the setting here is a close-security diagnostic centre for women with psychiatric disorders. For those of you who aren’t familiar with the term, close-security is a nicer way of saying very high security – like maximum security.

So these were women who have serious psychiatric disorders and there’s lots of misbehaviour, in fact a lot of violence in these living units, and the problem was this large number of misconducts.

We were brought in to work with this agency to reduce the use of isolated housing and keep women in congregate programming – keeping them able to engage with others is very helpful for their treatment plan – and when the misbehaviours accumulate too high it becomes very tough for everybody, because everybody has to be separated from each other.

So they wanted to test a strategy that would reduce the number of misconducts, to keep everybody safe and on track with their programming. The response was that they got together – and we love collaborative design where people come together: people on the security side making sure the women are safe, people from mental health, the counsellors, all the staff, with input from women who had been in this sort of programming unit – to design a home-grown program that they were going to put to the test, to see if this new program would reduce misconduct.

[Slide: The Chill Plan]

And the innovation was a crisis-prevention plan that empowers women to pre-emptively manage their own anxiety with their own personal response plan.

So this was truly an innovation – in fact I don’t know of anything like it in any of the facilities we’ve worked with. For those of you who’ve ever had a birth plan, or know someone who has: essentially you go to the hospital when you’re about to have a baby and the doctors know how you would like your birth to unfold, and that gives you a little bit of relief, knowing what’s going to happen.

With the Chill Plan the women knew exactly what would happen. They could trigger it – they could say, I would like you to invoke my Chill Plan; this is what I would like you to do when I’m experiencing an increase in anxiety.

I can feel I’m escalating this is how I would like you to respond.

Well, they tested it, we helped them run that experiment, they tested it with 238 women who were entering the units, they were randomised to be informed of the Chill Plan, given an opportunity to draft their Chill Plan.

Seventy percent of the women chose to do so. This was what’s called an intention-to-treat randomised controlled trial: whether they drafted a Chill Plan or not, if they had been given the offer they were counted in the intervention group and compared against the control group.

And the outcome measure there was misconduct, which was really maintained in administrative data.

There was a data source there for this study which made this one a little easier to do because there was a clear data point that could be compared at the end of the study.

And the point of this is not to tell you about the outcomes, but really about the process that went into it.

[Slide: The Chill Plan brochure]

But the outcome of this Chill Plan study, which has been replicated and has held up, was that the women who were given the opportunity to draft their Chill Plan had a significant reduction in subsequent misconducts – the serious misconducts that rise to the level of being written up.
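As an illustration of the kind of intention-to-treat comparison described here, the sketch below (not from the webinar; the records shown are hypothetical) summarises misconduct rates by randomised group, counting everyone who received the offer as the intervention group whether or not they drafted a plan.

```python
# Illustrative sketch (hypothetical data): an intention-to-treat summary.
def misconduct_rate(records, group):
    """Proportion of people in `group` with at least one recorded misconduct."""
    in_group = [r for r in records if r["group"] == group]
    with_misconduct = [r for r in in_group if r["misconducts"] > 0]
    return len(with_misconduct) / len(in_group)

if __name__ == "__main__":
    # One record per participant: randomised group plus misconduct count
    # drawn from administrative data.
    records = [
        {"group": "intervention", "misconducts": 0},
        {"group": "intervention", "misconducts": 1},
        {"group": "control", "misconducts": 2},
        {"group": "control", "misconducts": 0},
    ]
    for g in ("intervention", "control"):
        print(g, round(misconduct_rate(records, g), 2))
```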

[Slide: Changing spaces]

We know there are sorts of interventions that involve micro-environmental design – sometimes big changes, sometimes small. Big changes are great, but even little ones can make a difference in terms of the spaces around you, and this could be out in the street, thinking about something you’re going to do to an outside space, or to an inside space.

So we’ve been testing these sorts of changes, and they do lend themselves to sometimes really quick little evaluations of whether those changes are making a difference.

[Slide: Soothing sounds]

So we’ve tested colour palettes, we’ve tested changes in lighting – New York City just finished a relatively large-scale randomised controlled trial of street lighting.

We’ve tested, in different environments, what happens when you introduce soothing sounds, or nature elements like fish tanks, into a place that might otherwise be one of high anxiety, or when you bring in diffusers and lavender – there’s growing scientific data around the soothing effects of lavender.

And so we’ve been infusing lavender into situations and seeing if it makes a difference – all kinds of micro-changes. We have a multi-state project out right now which we’re calling ‘nature baths’, introducing a collection of nature-based elements into environments that have problems with violence, and hopefully a year from now, if COVID lets us get out the door again, we’ll know whether those made a difference.

[Slide: Example: Staff wellbeing]

Many of the agencies we work with are interested in the theme of staff wellbeing, and in fact if they aren’t, we always encourage them to be.

We remind them that their staff can’t engage well with the clients and the community they’re serving – we can’t expect them to have healthy relationships with the public if they themselves aren’t healthy, aren’t well.

So this is an example of one of the studies from that body of work. This was an officer-training facility in Pennsylvania, and the issue here was that these were officers who work in stressful environments – and there’s a host of evidence on the negative outcomes that chronic, long-term job stress leads to.

Burnout, productivity losses, absenteeism, just this pessimistic way in which they navigate their work environment, and then there are psychosomatic diseases too.

So the response there was: this is an issue in our agency, and in serving the needs of our workforce we want to respond with an intervention aimed at stress reduction. So a Pracademic team – these are people in the agency – set out to think about how they wanted to respond to the issue they had identified within their organisation.

[Slide: Staff wellbeing]

And the innovation that they attempted was they tried a program from the Brain Institute.

It was a mindfulness program, it takes 13 hours to complete over a course of four weeks, so we call this a Pragmatic RCT.

This is a small randomised controlled trial, 56 people on staff, and the purpose was to really just learn about how to do this sort of testing with these staff too, so they can do lots of other wellness testing.

How do we do a randomised controlled trial?

What are the processes involved?

So this was for them their first Pragmatic field experiment of trying something on staff wellness.

And the outcomes here were standardised questionnaires administered at baseline and then at follow-up to the group that was randomised to receive the mindfulness intervention and the group that was not. They included stress, fatigue and resilience – actually there was a host of outcome measures we were looking at for both the intervention group and the control group.

So the point of this one is that that did not come from data in an administrative database, there was no data on these outcome measures.

In fact they did have absenteeism, that was the only one that they had in administrative data.

For these other measures we helped them create the instrumentation, and the data had to be collected as part of the project.

So the first example, The Chill Plan, relied on administrative data, no-one had to get into the business of data collection, for the second one new data had to be created for the purpose of that study.

[Slide: Tests of new technologies]

So while we’re thinking about how to do these kinds of tests in our business places, our workplaces, I want to just kind of point out other sorts of studies, other sorts of topic areas that can very readily lend themselves to evaluation research, so that you’re introducing something new.

We always ask you to press pause – we won’t slow you down – but think about how to wrap a data collection exercise around this.

New technologies, if you’re ever adopting one, really lend themselves well to experiments; they can be conducted ethically and you can do them on a relatively short timeline.

We, for example, have been doing lots of tests on virtual reality, which is growing in use in the US and other parts of the world.

There’s an agency thinking about introducing virtual reality, we’re working alongside them to test whether their technology is actually improving outcomes.

[Slide: Many VR applications]

And there’s lots of different application tools we’re testing, and you could be doing this too, VR is being used as a training tool, as an incentive, as a way to communicate information.

Think for example if you have a youthful group that you’re working with, you can either sit there and tell them something, blah, blah, blah, blah, blah, or make them read a handbook of blah, blah, blah, blah, blah, or you can put them into an immersive headset experience and you’re much more likely, they are now a captive audience in the headset and much more likely to consume the information that you’re feeding them.

Mindfulness applications if you have staff in stressful job situations, five minutes of a VR headset taking them scuba diving or into an ocean scene can really bring things down, and bring down the stress levels.

And we’re also building out a whole new line of experiments looking at VR applications for people with substance-use disorder.

[Slide: Appt reminders, medication adherence, the unhoused]

Another technology that everybody’s using is smartphone applications – these are routine for appointment reminders now.

If you’re going to the dentist they’re sending you a reminder telling you to show up tomorrow at three o’clock.

I think a lot of agencies are moving in this direction; we were really one of the first in the US to start testing these – really quick tests of even text messaging to get people to show up for their appointments. And if people have to take medication, and there’s a public benefit to keeping them on their medication – if it’s anti-psychotics or some other medication they need to take – there might be real social value in keeping them connected to their medications.

We did a study on medication adherence with a cell phone check in that really pushed up compliance, adherence with their medication.

And here’s one in Seattle: we’re helping unhoused families move into appropriate housing more quickly by staying constantly in touch with them through text messages.

There are lots of ways you can do this. One of our first snapshots came from someone in probation in Oregon: if someone didn’t show up for their probation appointments a couple of times an arrest warrant would go out for them, and even for one missed appointment there would be a mandatory, very brief one- or two-day sanction. They wanted to avoid the no-shows, and used text message reminders.

And there was a dramatic reduction in the no-shows for that agency which was really quite consequential because the alternative was something quite draconian.

[Slide: Line of sight]

We have been encouraging agencies to just think about space, and think about these kind of micro-changes, and this really belongs in kind of our micro environment one.

This is an example of a line of sight study we started in Canada and then took it to Mexico too.

We were working with communities that had seen a rise in business crime – these are convenience store crimes.

Think about your local store – we have 7-Elevens, they’re everywhere; you’ll have your own kind of corner store. The sorts of crimes that happen are people coming to steal a cell phone, or alcohol if they sell alcohol there – certain things are just routinely taken from these stores, and often a weapon might be involved.

In Canada it was knives, in Mexico knives and guns, but the idea was: how can we make sure the staff in these stores are safe? Because if we’re keeping the staff safe it means people aren’t committing these crimes there, which means the community who come into these stores are safer too.

And a very simple intervention was the line-of-sight intervention. Often these stores are littered with cigarette ads and beer ads, so if you’re passing the store, what happens if you just create a window of space on the glass that provides a direct line of sight to the cashier, or to the station in the store where those problems occur? It’s just a simple intervention – a line-of-sight intervention.

[Slide: Strategies for children at risk of poor outcomes]

Some of these interventions can be really small deals – just take off some of the ads and put a frame around the cleared space and you’ve got yourself an intervention.

Some interventions are big deals, sometimes they just aren’t, they can just be these small things but should still put a test around them to see if they’re working.

So this is meant to be a trilogy, by now we should have been able to show you part three, but because of COVID number three was shut down.

This is a suite of studies we’re doing in a community in Queens, New York – a really high-poverty community with lots of risk factors – and we’re focusing on children who are at risk of poor outcomes.

And the first study was really just how do we engage parents, how do we get parents to be more engaged with their children and making sure that they’re in school and on track.

The kids were all falling behind, not doing anything in school, how do we get parents more involved?

We ran through the first two, the third one which is actually my favourite, we’re about three quarters of the way through it, was chronic absenteeism.

So we’re working with schools where about a third of the students are missing at least a third of the school year because they just don’t show up – there are major truancy problems.

And the intervention there was really how do we start resolving this with an intervention focused on parents?

Well stay tuned, we hope the results of that will come out soon.

But again it was integrating with the community, identifying the pressing issues for them, coming up with strategies and then working with them to test those strategies.

[Slide: Failing cheerfully but responsibly]

Something that I always want to talk about when we talk to our partners is the importance of trying, because we’re only going to move the dial, move the needle to improve outcomes for the communities we care about if we are trying something new, and if we are testing the programing that we’re using in our communities.

So it’s very important that we also make it possible to fail cheerfully.

And it seems strange like the idea of failing, why should we celebrate failure?

The idea is that there are good, responsible ways to fail.

And there’s a quote from somebody whose name I can never remember who said, ‘nothing important can ever be done without the ability to fail cheerfully’.

So the idea is that if you’re doing something really important, every now and then you’re going to fail, and you can learn from that and move on.

So this idea of really embracing failure but doing that in a way that’s responsible and for us that means keeping data around something.

If you’re trying something, that’s fine; if you’re working on programming, make sure your programming really is serving your goal.

Keep an eye on data in a timely way, so that if something isn’t moving in your direction you have the opportunity to modify it, or shut it down and try something else.

And then the willingness to share that information, we find so many agencies when something doesn’t go their way they kind of pretend it never happened, and what happens then is that sort of approach, those sorts of strategies keep alive, they never die because no-one’s willing to say hey, this doesn’t work.

So there has to be a willingness to point it out when something doesn’t work, and a willingness to be responsible about it.

[Slide: Letter to notify visitors about prison rules]

These three are all three examples of studies that we were very enthusiastic about going into but just didn’t work, and we are so grateful that those agencies were working with us and keeping data, because they all thought these were great ideas and they would still be doing them today because they had face-validity, they seemed like reasonable things to do.

The first one was a notification to family strategy about contraband coming into facilities, they wanted to just put people on notice.

Well I’m not going to have time to go through the details, but it backfired.

It backfired for that agency.

In fact they were so confident that it was the right thing to do that they made us replicate the study.

They replicated the study in a different part of the state and had exactly the same outcome: this is not working for you, stop doing it. And they did, and they reoriented to something else.

The one in the middle is really a study of incentives, trying to get people to comply with medication adherence.

This is for a population of people with severe mental health disorders.

And it turns out that paying people a small amount of money was really incentivising.

People would take their medicine even for ten cents.

But it was very complicated for the agency to administer the cash – there were all sorts of logistical headaches – so they tried fruit instead, and people were actually quite motivated by the fruit, except that the fruit kept getting bruised and it was also a logistical nightmare.

Well, candy was really easy once they administered candy as an incentive, and that was a good learning exercise – they learnt what not to do – and the next step of this is to give people a choice.

Don’t give people a candy, give people a choice of a candy.

People are very motivated by choice, it’s very respectful, it’s like I can choose to have a red candy or a blue candy or this candy or that candy, might be more motivated than if you tell me what I’m going to get.

And the last one, oh near and dear to our hearts too, was this was a really big deal study.

We worked with a state agency to change how they paid out cash assistance to needy families.

They saw a big issue in their data and that people really struggled on the front end.

They needed to go on cash assistance, I’m not sure what it’s called locally in Australia, but they needed to go on that assistance from government, but the first cheque only came in a month later and these people were not able to get jobs often.

Some small hurdle was in the way of them even starting to get towards success.

They needed a uniform for the job, or gas in the car to get where they needed to be. So they said: how about, instead of paying them a month from now, we break it up into weekly payments? And in fact there was a small study from Harvard suggesting that paying weekly rather than monthly led to better outcomes and fewer economic crimes, because people would never reach that level of need – the money was being meted out slowly.

Well, it seemed like a great idea, and there was good evidence to suggest it was, but the agency didn’t just roll it out – they tested it with us in a randomised controlled trial, and we were only a few months in when we shut it down, because along the way we all sat and listened to people and learned from the field what was going on.

Even though this sounded like a great idea, even though there was an evidence-base suggesting this was a worthy idea, a few months in we recognised that many people in the study group were not able to make rent.

Sometimes lumpy payments – those monthly payments – can make a big difference to people who have lumpy expenses to take care of. What the state was able to do was learn from them and say: okay, what we can do now is have a small cash infusion on the front end and then go back to monthly payments, because that helps our public thrive – for their community that was the better approach.

[Slide: Use your data to answer questions]

We also really want to encourage our partners to use their own data to answer questions.

We often find there’s data sitting there going unused; put a smart data scientist in front of that data and suddenly questions that people have had for years are being answered.

And I’m going to take you through kind of a silly example of that, but the lesson applies much more broadly.

[Slide: BetaGov Hello, Pracademics!]

We recently had Halloween, and we did actually have trick-or-treating in my neighbourhood because our COVID numbers are lower now.

And last year for Halloween we put this out in our Halloween ‘Hello Pracademics’ newsletter. What had happened was that we were doing some work with policing organisations in Canada, Mexico and the US, and people kept talking about the lunar hypothesis – how they would think about officer staffing or planning around the time of a full moon, because if there’s a full moon there’s supposedly a host of negative outcomes associated with it, and they wanted to be ready for the increase in mental health calls or public service calls that comes when there’s a full moon.

And the officers were really convinced by the lunar hypothesis.

So we asked them can you just see, you have your data, you have your incident calls, can you just see if this actually holds out.

We know the phase of the moon, or have the data on that, and you have the incident data let’s just take a look.

[Slide: Moon Phase and calls for police service]

So in three countries we quickly teed up lunar hypothesis studies – there are multiples of these – and what we found when they just looked at their data (and we showed them exactly how to ask questions of their data) was that there was no relationship at all.

And in fact, it is still one of our least popular studies with police because they’re still convinced it’s true, and we keep having to take the new agency’s data and look at them too.

So we’ve repeated this many times now, trying to find an agency for which the finding does hold, but so far we haven’t.
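As an illustration of the kind of simple check described here, the sketch below (not from the webinar) compares average daily calls for service on full-moon days against all other days, assuming a hypothetical CSV with 'date', 'calls' and a 'full_moon' flag (1 for full-moon days, 0 otherwise).

```python
# Illustrative sketch (hypothetical file and column names): moon phase vs calls.
import pandas as pd

def full_moon_comparison(csv_path="daily_calls.csv"):
    """Compare mean daily calls on full-moon days (full_moon == 1) vs other days."""
    df = pd.read_csv(csv_path)  # assumed columns: 'date', 'calls', 'full_moon'
    means = df.groupby("full_moon")["calls"].mean()
    print("Average daily calls, other days:    ", round(means.loc[0], 1))
    print("Average daily calls, full-moon days:", round(means.loc[1], 1))

if __name__ == "__main__":
    full_moon_comparison()
```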

[Slide: Even a simple dashboard can be transformative]

But the idea there is it doesn’t have to be a big deal, use your data, ask questions of your data.

Sometimes you can ask questions just by letting yourself be curious and then doing some simple testing within your own data.

Another thing, and this is becoming so easy to do, so much easier right, thanks to lots of software tools that are already out there.

We often do this by producing our own software for the agencies we work with, but often even existing tools go a long way.

See if there are ways to dashboard what you’re already doing: a simple dashboard on process measures, a simple dashboard on case characteristics. If you have outcomes data already in your database, get it dashboarded so that in real time your staff can start to see what’s going on.

And we’ve seen amazing transformations and reforms happen when that delay in information isn’t there and people can see in real time what’s going on.
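As a rough sketch of how little a "dashboard" can require, the example below (not from the webinar; the file and column names are hypothetical) charts a single monthly process measure from data an agency might already hold, using off-the-shelf tools.

```python
# Illustrative sketch (hypothetical file and column names): a minimal monthly
# "dashboard" of one process measure.
import pandas as pd
import matplotlib.pyplot as plt

def monthly_dashboard(csv_path="cases.csv"):
    """Plot the number of cases opened per month and save it as an image."""
    df = pd.read_csv(csv_path, parse_dates=["opened_date"])  # assumed column
    monthly = df.set_index("opened_date").resample("M").size()
    monthly.plot(kind="bar", title="Cases opened per month")
    plt.tight_layout()
    plt.savefig("dashboard.png")  # share the image or display it on a wall screen

if __name__ == "__main__":
    monthly_dashboard()
```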

[Slide: BetaGov’s role in your research]

So this is how, just to take you through a little bit of nuts and bolts, this is how BetaGov often is involved in research, and I say BetaGov’s role in your research, I want to start by saying our role can be many things.

We often talk about being a group that can do ‘soup to nuts’ – remember when we could go to restaurants, and if you went to a fancy restaurant you had the option of starting with soup and going all the way through to nuts at the end? You could have either the seven-course meal or just the one entrée.

So we could be – have no role in your research other than saying we are champions for this idea of learning and that is our role.

Our role can be to cheer you on and say, go at it.

Get in touch with your data, collect data if you don’t have any, empower your people to ask questions.

But if we are collaborating together, and again, even within our collaboration it could be a tiny role or a more substantial role, it really depends on the project, this is what our role can be or it typically is with a collaborating agency.

[Slide: Design]

So we start at the beginning in design and help with that design.

So we start to think about drafting the descriptions of what this intervention’s going to be, and what the analysis plan might look like.

What is feasible?

What sort of study can be done here and what that analysis can look like?

But before we even get there we like to make sure that we understand the study in context, that means we have a team at BetaGov and we do the literature reviews.

What else has been done?

What do we know about this thing, or things like it, right, just so you know that you also have the benefit of knowledge of what’s come before you.

Not to say hey, go do that, but just to make sure that you’re informed about any learning that’s happened before.

We also like to help to prepare scripts for staff and study participants, and what we mean is if you’re starting to do an evaluation or test something, you’re trying a new innovation, how do you talk about that to your colleagues, how do you talk about that to study participants?

There are more helpful and less helpful ways to do that, so we like to even give some kind of template scripts, and hey, this is a productive way to talk about research within your agency.

We also, if there’s surveys involved, sometimes there might be a reason to collect that sort of information, a pre-survey or a post-survey we can help in the design and the collection of those surveys.

And then there’s also kind of an initial Pracademia training.

We have a one-hour session where we onboard our new Pracademic partners – there’s no cost to be in Pracademia – you would come to us and we’d take you through a webinar and share a little more detail about how the BetaGov model works.

And then if you’re working with us on an actual pilot together, there would be subsequent touch points to make sure that, if there needs to be any skills transfer, you are getting that help along the way.

And then, very importantly, ethical issues are paramount, so if your agency has any ethical review protocols specific to it, we can help you prepare documentation for that, or for whatever the ethical reporting requirements are, to make sure you’re always being mindful of them and that everything is being complied with.

[Slide: Analytics]

On the analytics side we are available to help with analytics and also to troubleshoot with any implementation issues that arise.

We always encourage our partners to check in with us.

We’ve done so many pilots now across the US and in other countries, it’s often implementation issues might come up in a jurisdiction and if they haven’t had that much experience it might seem insurmountable or it might be sort of complicated to think through how you might resolve that.

We’ve had lots of experience doing this, so sometimes we can help you troubleshoot your implementation issues – not just on the analytics side but also on the implementation side. Please reach out to us and keep us informed; we’d love to help you think through those issues.

If you’re doing a randomised controlled trial we have a BetaGov randomiser that we created for Pracademics, it’s easy to use, it helps you make sure that you have a balanced study, so a higher quality research project, so you can use ours, you can use yours, whatever makes sense would help you think through randomisation.
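One common way to keep the two arms of a trial balanced is block randomisation. The sketch below is a generic illustration of that technique only – it is not a description of how the BetaGov randomiser itself works, and the participant IDs are hypothetical.

```python
# Illustrative sketch: block randomisation to keep trial arms the same size.
import random

def block_randomise(participant_ids, block_size=4, seed=42):
    """Assign IDs to 'intervention'/'control', balanced within each block."""
    rng = random.Random(seed)
    assignments = {}
    for start in range(0, len(participant_ids), block_size):
        block = participant_ids[start:start + block_size]
        labels = (["intervention", "control"] * block_size)[:len(block)]
        rng.shuffle(labels)  # shuffle within the block so order is unpredictable
        for pid, label in zip(block, labels):
            assignments[pid] = label
    return assignments

if __name__ == "__main__":
    ids = [f"P{n:02d}" for n in range(1, 13)]
    for pid, arm in block_randomise(ids).items():
        print(pid, arm)
```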

And research issues that come up, something’s going to happen, life happens, it’s a pragmatic field experiment or field tests and pilots, things are going to happen we can help you think those through.

And then we assist with data analysis, some people we’ve worked with have very large research shops.

They are perfectly capable of doing in-house analysis themselves and that’s what they want to do and that’s absolutely great.

More often than not the agencies we work with prefer that we do it: we have a data-sharing protocol and secure transfer of data, we are very careful about how data is managed, and we do the outcomes analysis for them.

It’s absolutely up to the partner to decide.

[Slide: Dissemination]

And then we want to make sure that others get to learn from you.

We help with dissemination.

It’s really important that both the good news and the not-so-good news be shared.

But what we do with agency partners is recognise that some agencies are risk averse.

If something didn’t work out they don’t necessarily want the world to know about it, so what we say is: well, how about the opportunity to anonymise?

Some agencies are very proud to say hey, we tried this and it didn’t work and articulate that, others prefer to say hey, we tried this in the western part of the country, they don’t name where they’re at, but we do allow the lessons to be learned, and that’s something we really want to kind of encourage within our network, getting the message out so that others can learn from you.

We like to announce the work you’re doing on social media and post updates, and we prepare those snapshots that I showed you. Those are written for the Pracademics, and the Pracademics are credited as the authors of the learning that has happened, in a one-page snapshot – which we have found is the magic number of pages.

Long research reports rarely get read by practitioners in the field, and often not by decision makers themselves.

Now, the research teams might read them, the analytical units might read them, but a lot of people for whom there would be great value in getting that information just don’t, because of these lengthy research reports we create.

We tested the magic number.

The magic number is one.

You are most likely to get someone to click on the research and actually look at it if you have less, so less is more, so we just put all the results to a one-page snapshot so that we maximise the number of sets of eyes that will get onto your research, the number that will consume what you’ve learned.

[Slide: Expectations of Pracademics (you)]

So that’s what we do for our Pracademic partners, what about our expectations of you as a Pracademic?

If we put the time into working with you, we want to make sure we get to a good product that we can learn from – pilot tests that create value by creating knowledge.

[Slide: Your role]

So your role, we want to be able to count on you, and this sort of means from your end, first of all is really being the fountain of ideas, right, so bring us your ideas, think about whether they are feasible, can this be done?

When we’re thinking about feasibility we really have to be in conversation with you.

We can go back to our drawing boards and try as much as we can to think about whether this is viable and feasible for you, but you know more about your agency, you know more about your community, you know whether this can actually be lifted, so that front end has to be collaborative when we think about what is feasible.

We also want to make sure that you’re contributing to the design; we’ll have the academic team on track with you to help you through this, and we want you to weigh in on whether the ideas we’re giving you are, again, ones that are feasible.

We want you to review the materials we’ve prepared for you and approve the study descriptions – is the study we’re designing with you one that is viable?

And then help us understand the human research oversight issues that are required in your agencies.

We know what’s required of our university, what is required of your agencies, something we’re going to have to rely on you to help us navigate.

And then we’re going to ask, if we get to launch and we have a pilot that’s active we do believe in nimble research, in fact we support agile research, so that’s research that can change along the way, but we do ask that you not change the trial design or the pilot design or change the methodology without checking in with us.

And the reason we do that is we want to keep data around that change so when we do outcomes analysis we can be responsibly reporting on what actually happened.

And then of course communicate with us, we love what you’re doing, we want to know about it, just let us know how things are going, and then again let us know if implementation issues are coming up we want to hear about them.

And finally, if we’re doing data analysis at the end, we really want to make sure we have strict protocols over data sharing, we want to make sure that you comply with our protocols to make sure that we never violate privacy of the people that you are working with, and we can share with you our protocols and we ask that you comply with our protocols around any data sharing that happens.

[Slide: How it works]

So in a nutshell this is how it works: you submit your ideas to us, and we vet them with you and your leadership to make sure this is something that’s feasible and viable and that there’s the will to proceed.

We’re happy to also help you in speaking with your leadership, sometimes it is helpful to have us with you or bring us in after you’ve introduced the idea to kind of continue that conversation with your leadership.

You know there’s an introductory one-hour webinar where we talk a little bit more about the process in more detail, and after that you’ll be assigned a BetaGov team which works in much greater detail with you to start designing the template of your pilot, what is this pilot going to look like?

And with your BetaGov team in place, once we clear ethical review and have an agreement on what this is going to look like off you go, let the learning begin.

[Slide: The result: a network of public-sector innovators]

And our goal here really is just to create a global network of people who are working for public good, working on something that has the opportunities to move the dial in the service of the public interest, and so we’re really interested in partnering with those kind of social sector innovators who want to create knowledge in the organisations and share with others because we have so much to learn from each other.

[Slide: Big idea? Small idea? Innovate with us]

So whether it’s a big idea or a small one – some of our pilots have been really huge overhauls for our partner agencies, and some of them have been much smaller, and we love all kinds – we hope you’ll innovate with us.

And with that said I’ll hand back over and stop sharing and hand back over to you.

[Victoria State Government]

The beauty of the BetaGov model is its adaptability and innovation. As Professor Hawken noted: ‘with a little help, ANYONE can do research’. The session highlighted the importance of undertaking fast assessments of program effectiveness, to help continually improve practice within our workplaces. 

The BetaGov approach recognises that the people closest to the problems are often the closest to solutions, and that nurturing and supporting local expertise and knowledge can go a long way to ensuring the right problems are being identified and addressed effectively.  

Other key points from the presentation included:  

  • there’s always a case for maintaining data 
  • it’s important to focus on putting research into practice 
  • determining what success looks like and evaluating programs is important
  • we need to learn how to ‘fail cheerfully’ (but responsibly). 

There are multiple opportunities to adapt this model across a wide range of areas and projects here in Victoria. BetaGov has been successfully utilised in other countries to support practitioner-centred innovation and testing across government sectors including corrections, courts, police, education, health and social service delivery – and it has the potential to be used by Victorian ‘pracademics’ to measure the effectiveness of the work of government, councils and communities in new and innovative ways.

For more information about BetaGov, please email communitycrimeprevention@justice.vic.gov.au.