RTO Superhero 🎙️ Empowering RTOs to Thrive!

Assessment Alignment: Valid, Reliable & Audit-Ready

Season 5 Episode 12

In this powerful episode, Angela Connell-Richards sits down with assessment expert Maciek Fibriek to unpack one of the most critical topics under the new 2025 Standards—assessment practices and audit readiness.

Together, they dive deep into:

✅ What’s actually changed under Standard 1.3
✅ Why fit-for-purpose assessment means contextualising for your learner cohort
✅ How to test, validate, and align your tools to ensure compliance
✅ The role of AI in developing assessments—myths, risks, and best practice
✅ Real-world audit insights and red flags to avoid
✅ Why consistent validation and a culture of compliance are key

Maciek shares practical strategies, client stories, and common compliance pitfalls, including what ASQA is really looking for when they review your assessments. From AI-generated tools to LMS adaptations, no stone is left unturned.

Whether you’re preparing for audit, re-registration, or just want peace of mind—this episode will guide you on how to create assessments that are valid, reliable, and aligned to industry outcomes.


Join host Angela Connell-Richards as she opens each episode with a burst of insight and inspiration. Discover why compliance is your launchpad to success, not a limitation.

Connect with fellow RTO professionals in our free Facebook groups: the RTO Community and RTO Job Board. Visit rtosuperhero.au/groups to join today.

Ready to elevate your RTO? Join our Superhero Membership community and gain access to expert resources, training, and personalised support to help you thrive.

Discover ComplyHub—Vivacity’s AI-powered compliance management system. Designed to streamline operations and give you peace of mind. Book your demo today.

Wrap up with gratitude and guidance. Subscribe, leave a review, and join our community as we continue supporting your compliance journey in vocational education. 


Thank you for tuning in to the RTO Superhero Podcast!

We’re excited to have you join us as we focus on the Revised Standards for RTOs in 2025. Together, we’ll explore key changes, compliance strategies, and actionable insights to help your RTO thrive under the new standards.

Stay connected with the RTO Community:

📌 Don’t forget to:
Subscribe to the RTO Superhero Podcast so you never miss an episode!
Share this episode with your RTO network—compliance is a team effort!

🎙 Listen now and get ahead of the compliance changes before it’s too late!

📢 Want even more compliance insights? Subscribe to our EduStream YouTube Channel for our FAQ series on the New Standards for RTOs 2025! 🎥

🔗 Subscribe now: EduStream by Vivacity Coaching

✉️ Email us at hello@vivacity.com.au
📞 Call us on 1300 729 455
🖥️ Visit us at vivacity.com.au

Angela:

Yes, travelling around the country still. Maciek was just telling me he lost a wheel off his car, so he's had an interesting week.

Maciek:

It's been good.

Angela:

Yeah, yeah, okay. So let's take a deep dive in. First of all, our focus is going to be around the revised standards and what's changed and what hasn't changed, and there is quite a bit that hasn't changed, but also what we're seeing at audit. So what's happening with assessment tools at audit, and then also having a look at the importance of aligning assessment practices with industry outcomes, which is what the whole point of these new standards is about: how we align what we do with what industry needs. So let's dive in. The first one is 1.3. Standard 1.3 is that assessment practices must ensure validity, reliability, fairness and flexibility. Let's have a look at what this means in practice. What's different from where we were before?

Angela:

What are your thoughts on this, Maciek? Realistically, whilst the wording has changed, I really think the intent has not.

Maciek:

That's right, it feels the same. Yeah, and look, the simple statement is fit for purpose and consistent with the training product. It's really 1.8(a); it comes to mind straight up. And you know, again, if our assessment practices are not aligned to what industry want and we're assessing something completely different, then we have a problem, right, because students are going to be assessed on things that aren't aligned and therefore are going to go into industry with potentially a different set of skills or knowledge, or potentially not even be competent.

Angela:

Yeah, yeah. And I think in the past, from what I've seen, people have been buying assessment tools and then not contextualising them for their learner cohort, and that's where a lot of the gaps are. But also they're not collecting sufficient evidence to demonstrate that the student is competent.

Maciek:

Yeah, there's definitely two parts to that, and I think you're right. The first is not checking your tools beforehand. That, I think, is a really critical thing that a lot of people put down to not enough time, aka being lazy, or not having the capacity to check that those tools actually align to the unit of competency and are structured in a way that makes them easy to mark and easy for students to follow, et cetera. But the second point that you made is really valid: we're in an amazing part of this journey at the moment where we do have, as you mentioned in the introduction, AI, ChatGPT and other tools, where we can take the tools that we currently have and truly align them to each individual cohort, whether that's for a specific type of industry or a particular type of cohort. We're still meeting the requirements of the unit, but making the tools, as the revised standards say, fit for purpose.

Angela:

Yeah, yeah, really identifying who your learner cohort are and how you're adjusting the training to meet their needs. And I think a lot of that falls under support services as well. Under the new standards there's a lot about how you're supporting the students along the way and adjusting the training to meet their needs.

Maciek:

Yeah, look, a perfect example: one of my clients that I've worked with for many years. They work across different industries, so everything from warehousing, truck driving, bus driving, et cetera, and we've actually written learning materials and assessment tools for the same unit of competency in the context of those learners. So it's the same unit, but bus drivers are learning about safe handling of luggage into bins, whereas truck drivers are learning about loading and unloading goods in a trucking context, and warehousing people are learning in their own context. And so suddenly we've got this ability to focus on the student without spending too much time saying, okay, it's going to take me a day or two to rewrite the assessment tools. Well, if we've got the framework, it's only a small tweak, right?

Angela:

Yeah, and how awesome is that? Adjusting the case studies, and adjusting the content of how they're managing that as well. Yeah, absolutely.

Angela:

I think the other one, where that often comes up, is first aid: how are they adjusting first aid for the different learner cohorts and meeting those different needs? So that's where you go, okay, are we targeting people who work in childcare centres, or are we targeting people who are going to be first responders, maybe in a voluntary role, or are they just general Joe Blow off the street who wants to do first aid?

Maciek:

And so we had the same thing. We actually wrote, again, a first aid unit in the space of, like you just said, voluntary first aid as first responders. But then that same unit is also in the Coxswain course, and so they're on a boat. Two completely different environments. They're still having to learn about first aid, but, you know, doing CPR for two minutes on a rocking boat can potentially be a little bit trickier than doing it on a hard surface in an emergency situation in a childcare centre. So, yes, these are all things that we can do. Which begs the question, and we started talking about this earlier, about that context and transferability of skill, which then leads to a whole different set of problems, because units define the outcome but they don't define the context, and that's where RTOs make it fit for purpose. But then how do we look at transferability of skill, and also credit transfer, when someone's done it in a completely different environment and yet we're still forced to do credit transfer?

Maciek:

So, yeah, there are two sides to the discussion that need to be had, especially with the whole review that's happening at the moment about the structure of qualifications and training packages and so forth. It'll be quite interesting to see how that works.

Angela:

Yeah, yeah. So one of the main things that I've seen that is different in these standards is the testing of the tools, and I've had a lot of people asking me, so what does that mean, Angela? What does testing mean to you, Maciek?

Maciek:

I'm a believer that, you know, communication at its most basic is words, and words all have meaning. But whether we're calling it testing, or what was once flagged as pre-validation, at the end of the day it's about ensuring that the tools are valid. For me, testing is firstly making sure that the tools, again, are fit for purpose, but also just doing a pre-use test, right? Asking your team to read through and undertake the questions so that, in a simulated environment, they can actually see: do these tools actually fit the intended use? So doing testing, asking industry to potentially provide some guidance: is this what we're doing? I mean, a lot of that should have been done through the industry consultation process, and through you personally knowing industry, because you guys are the experts, so a lot of that should already come in from that perspective at the tool development stage. But it's just about testing it, making sure that the tools are actually doing what they're designed to do.

Angela:

Yeah, the way I've been explaining it to people, and what I recommend, is that your trainers should test the tool. They should complete the assessments so that they fully understand how the assessment is structured, because if you start with the end in mind and you know what the assessment's going to be, your delivery of training should lead to that. And I think all assessors should complete the assessment that they are going to be delivering.

Maciek:

Yeah, there are two things that I'll say to that as well, though. One, if the person that writes the tools is testing the tools, then there could be some bias or some document blindness, as I call it. So that's something also to consider from that perspective. And I had a second point that will come back to me in a second. But yeah, it's just making sure that the tools really are written in a way that students can understand, because it's all about the student, right, and making sure that it's easy to follow.

Angela:

Yeah, and I'll go back to what you said: if the person who wrote the tool is completing the assessment, get someone else on the team to complete the assessment.

Maciek:

Yep. So I was just going to say, the other thing, which a lot of people often don't do, is make sure that your learning material aligns to what's also in your tools. So if your assessment tools have been written or purchased separately, make sure that the material actually covers what's in your assessment tools, especially at the lower AQF levels, so that you're not suddenly asking a question about something that hasn't been taught. If the students have gone on the journey of reading the material, listening in class or online or in a webinar, then when they get to the assessment task it's not unfair, because they've actually been taught what is being assessed.

Angela:

Yeah, and they should have a thorough understanding of the content by the time they get to assessment. So from my point of view, the best way to test the materials is to complete the assessments, and then from there you build out how you're going to deliver your training. And I think for anybody who's hiring a new trainer, the first thing they should get the trainer to do is complete the assessments.

Maciek:

Yeah, and it shouldn't take them long if they're the industry expert right.

Angela:

Yeah, that's correct. And as a new trainer, I think that's a brilliant way. I know I've worked in RTOs as a trainer, and often I got the assessment tools just before I was walking into the classroom and I'd never seen them before.

Maciek:

And it's a great way to pick up on spelling, grammar, flow, all those things. You know, if someone fresh is looking at a set of documents, it's likely that they'll pick up on those silly mistakes that other people haven't seen.

Angela:

Yeah, yeah. But I've also, as a trainer and assessor, developed assessment tools on the fly. I was in an RTO that didn't provide me with anything; they had the training materials but not the assessments. So I took the training materials and the units of competency and developed assessment tools. I don't know whether every trainer could possibly do that, and this was before we had the diploma for developing assessment tools. I'd come from a background, prior to getting into training, of writing policies and procedures, so it made sense to me, but I couldn't see every trainer being able to do that. So I think we are living in a beautiful world.

Angela:

Yep, 100%. Yeah, okay. So I haven't really seen anything else that's changed, and in our next episode we're going to be talking about RPL and credit transfer, which works really well with what we were just discussing, so we'll go through that. But when it comes to it, what are your thoughts around what makes an assessment valid, reliable and fair?

Maciek:

How long have I got? Exactly, right. When we're looking at the current standards: 1.3 we've spoken about, and 1.4, again, to me that is no different to 1.8, one and two, as they are at the moment. So again, not really much has changed there. So when we talk about validity, reliability and so forth: for me, if the unit of competency asks a student, from an outcomes perspective, to be able to do all of these performances and have the underpinning knowledge or knowledge evidence, the tools become valid when they actually achieve that. There's a lot of discussion about assessment tools being the minimum requirement and therefore adding certain things on that might be required for a specific industry. Again, not really the place to be talking about that, but from a validity perspective, as a minimum, the tool must address the minimum requirements of that unit of competency, right? I think we can all agree to that.

Maciek:

From a reliability perspective, I think for me it's being able to use the tool multiple times, over and over again, with different people, et cetera, with enough instruction and enough understanding to have consistent approaches. So that's from an assessor's perspective, but also from a student's perspective. You know, a lot of the time I see assessment tools where the first, you know, even 16 pages of the assessment tool are repeated information about the unit, about the principles of assessment and the rules of evidence, and I sort of go, by the time students get to the assessment task, if they've actually read those first 16 pages, they're sort of lost. And so, for me, having tools that are easy to use, with clear, concise instructions, but enough to allow for that reliability to happen, likewise from an assessor's perspective, is key.

Angela:

And written to the level of the student.

Maciek:

Yeah, correct. I've had many discussions with auditors, assessors, quality assessors in the past about reliability, where, you know, it's a balancing act between providing information at a depth where there's no professional judgment available, and providing enough information to ensure consistency of use and performance from a student's perspective. And so I think there is a balance, where a suggested answer in context is provided, and a potential example or two of what is competent performance, or satisfactory performance to achieve competence, is important.

Angela:

So it's not up to individual interpretation; there is a framework that they need to work within.

Maciek:

Yeah, I agree with that. A framework, but also an instruction to the assessor that says, you know, based on the student's past and experience, the answer or actions may be slightly different, and that's okay, and that should be brought up in future, either at validation sessions or even at moderation, if the RTO is doing that. So there's flexibility to use some professional judgment, but then report and record that, so that it can be updated in your tools later on if it's consistently being undertaken that way.

Angela:

Yeah, and I totally agree with that. The way I explain it: when you look at a unit of competency, and then you have the rules of evidence and principles of assessment, you're basically overlaying the unit with these. So when you're validating the assessment tool, you're really asking questions of the tool: is the way this is written fair and flexible and valid, and is it current? And is the way the student is completing the assessment, or the assessor is conducting the assessment, consistent? And I think the biggest thing is consistency, that consistency you discussed. And I think one of the big things that not a lot of RTOs do is validating assessment tools across a range of different students and different trainers and assessors, so that you can see that you've got that consistency between different classes and different trainers and assessors.

Maciek:

And what a perfect segue to go to 1.5.

Angela:

Yes, RTOs must retain evidence that assessment decisions are made against the training product's requirements. So, what I've seen ASQA looking at when it comes to evidence, and I'd like to know your point of view on this. I get this question: how long are we supposed to keep the completed assessment tools for? And in the new standards I've seen conflicting figures, six months, and then I've also seen seven years, and it depends on whether you're doing government funding.

Maciek:

Yeah, so my understanding is the current standards say six months, right, from the date that the completion or the competency has been recorded. In the current standards it's always bizarre, because if someone's doing a one-year program, potentially you don't need to retain the record of evidence of that person's competence by the time they're halfway through the course, right? But my understanding, from what I've read, is that the new standards actually fall more into alignment with the ESOS Act and the National Code, where you have to retain the evidence for a period of two years after the completion of the program. That's the way I've understood it.

Angela:

Yeah, yeah. So some of the things that I've seen with audits, and this is something that's come up recently a lot with assessment tools, is assessment tools written using ChatGPT. I've seen a lot of people going, oh well, we can write all our assessment tools ourselves now, we don't need to purchase them. But ASQA auditors are actually running the assessment tool through ChatGPT or other AI software to see if it's machine written, and they also want to know: it's well and good to use ChatGPT, but are you contextualising it? Are you just simply putting in the unit of competency and saying, write me an assessment tool based on this, or are you putting your knowledge and skills into it? What are your thoughts about using ChatGPT to create assessment tools?

Maciek:

Yeah, look, I've not heard of ASQA doing that.

Maciek:

If they are, then again, if they're using it as a checking mechanism to effectively speed up the validation process, and if it speeds up the process of the audit and therefore saves the RTO money on ASQA auditing fees, no problem. What I do know, talking about that process, is that there are AI validation tools out there at the moment. I've seen some of them, and they're not always perfect either, just like AI is not always perfect. From my perspective, I like to call it hybrid AI, where we're incorporating our own personal experiences and the years of experience we've had in the industry with AI, and developing a prompting sheet or something along those lines that gives quality output, that allows a quality product to be produced.

Maciek:

I think if you're not using AI, and there's a lot of people out there that say, no, you can't use AI, et cetera, then you're very quickly going to start falling behind in industry. There's no reason why AI should not be used to develop learning material if it's done correctly. That's the critical thing here. You know, to use an inappropriate, sorry, analogy: shit in, shit out, right. If we really don't think about what inputs we're putting into any AI system, we're going to get rubbish out. And so a lot of the people that I've spoken to who have tried AI, various tools, don't choose to understand the system first, and they just say it's rubbish, et cetera. I refer to AI as the smartest, dumbest intern that you will ever have. If you teach it and contextualise the system to be the, I guess, persona that you want it to be, it can really do some amazing things. We have produced countless learning materials and assessment tools for various providers and we see some amazing results.

Maciek:

Is it perfect all the time? No, right, but that's where the human part comes into it: us as humans reviewing the material, doing a quality assurance process on it, which goes back to what we were speaking about with 1.3, making sure that tools are fit for purpose. But the timeframe, taking what used to take two to three weeks down to a matter of days, it's a no-brainer. Now, on the note of ASQA and AI, though, I don't believe ASQA have a problem with RTOs using AI. What I think ASQA have a serious issue with is RTOs not having a process or a procedure in place to check that students aren't abusing AI for the purposes of determining or filling out assessment tools, et cetera, and there being no checks and balances in place. I think that's where they have the biggest concern at the moment, and realistically, most RTO providers should have that concern.

Angela:

Yeah, and look, I certainly have no problems with people developing assessment tools using AI and ChatGPT. I use it every day, all day, and I keenly encourage people to use it, but it's about making sure you are checking the data. As you said, shit in, shit out. You've got to make sure that you've got the right data going in there, and use your human brain to really review it and identify: is this going to be fit for purpose? Is it going to meet the rules of evidence and principles of assessment, and does it make sense when the student's reading it? And I think that's the most important part, because, like what you said, I see ChatGPT as a highly skilled research assistant. That's what it's good at, but it's not good at that contextualising. So that's where you need to get in and do that. And although my bots have learnt a lot about me and my organisation, I still have to review stuff. You can't just take the first thing that comes out.

Maciek:

Look, I'm not going to give away too much that most people should already know. But when I use ChatGPT, whether it's for the development of learning materials or assessment tools, the persona that I ask it to take on is: I want you to take on the role of a world-renowned VET slash RTO consultant and instructional designer, but an instructional designer who has got over 10 years' experience in X industry, right. And so by taking on that persona of a consultant, an instructional designer and an industry expert, it suddenly has a persona that goes, right, I know how I need to think. And then we start doing the other prompts, and once you've got that down pat, the rest follows, right. But it's making sure that it's got the right framework at the beginning, because otherwise you do get rubbish.

Angela:

Yeah, and I think we've still got to put in our skills, knowledge and experience in that industry sector. That's the crucial part. And I think one of the other things that is really good about ChatGPT is personalised learning, where we can contextualise the training and assessment materials to the learner cohort by doing exactly that.

Maciek:

Creating a prompt of: this is our typical student, this is their background, basically your training and assessment strategy, AQF levels, language issues, terminology, even ethnic backgrounds, and asking it to do better case studies that are appropriate.

Maciek:

I mean, it's really invaluable, what tools we've got available to us. But the other thing that I would say is, when you're using it, don't just accept what it's saying. As, again, someone who uses it daily in our business, we see that even ChatGPT has bad days, right, and what you say to it one day and then ask with the same instructions the next day, it's potentially learned different things or been updated, and so you've got to always be checking and amending your prompts to make sure that it's still coming out with good materials.

Angela:

Yeah, and one of the things I recommend if you are using ChatGPT is that you can now put projects together, so you could have a project for a certain industry sector, and that way it will only take the content from that project. Yeah, awesome. I know these are areas that we're both passionate about, AI, and I really love training and assessment as well, and we've sort of covered some of the things around the most common assessment compliance failures and what I see at audit. And then I'll ask you, Maciek. What I see at audit, and I've touched on this already, is that the main things that assessment tools are non-compliant with are: they haven't been contextualised for the learner cohort, they haven't been validated, they don't address the unit of competency requirements and the assessment conditions, or they've simply spewed out the performance criteria of a unit of competency and turned them into a question-and-answer assessment tool based on the performance criteria. If you're going to be developing your own assessment tools, make sure that you are fully addressing all of those areas when you're looking at the rules of evidence, principles of assessment and the unit of competency. What are your thoughts around what the common areas, errors and failures are?

Maciek:

Look, in general, or using AI, or...?

Angela:

In general, everything.

Maciek:

Look, I think in general, a lot of people, like we spoke about at the beginning, don't contextualise the materials.

Maciek:

I've seen a lot of tools that also jump around a lot, or use a lot of opinion or past experiences of the developer, where you sort of sit there and go, okay, this has nothing to do with the context that we're delivering in. So I think it's really important that when you purchase materials, you do the checks, again making sure it's fit for purpose. It goes back to that simple statement. And so a lot of the time I see people buy it, they implement it, but don't actually give any thought as to how that context works. They don't analyse the questions and the depth of the questions, so a lot of the time they don't take into account AQF level and the dimensions of competency. So there's a lot of those issues. When we start looking at developing assessment tools using AI, again it's that prompting that needs to be there, but then also cross-checking, asking it to reverse map or map the materials it's produced, because, again, it doesn't always get it right. And so using the tools that we've got, and understanding the use of those tools, is really important.

Angela:

Yeah, I think you've wrapped that up very nicely there. The top three non-compliances I see consistently are trainers and assessors, assessment tools, and training and assessment strategies. Any one of them can trigger one of the others to be non-compliant. So if you're starting with a good assessment tool that you are validating and testing, you're going to have much better success at audit, but also with student completions as well.

Maciek:

Yep 100%.

Angela:

Yep, yeah, okay. So we've sort of discussed what some of the red flags are that ASQA are looking at within assessments. Have you got an example of what people should be focused on when they're preparing for an audit with regards to assessment tools?

Maciek:

Yeah, the biggest thing is obviously, when we're going into an audit, whether it's an initial application or monitoring, making sure that the people who know the tools are there. When we're talking about initial applications, we're generally providing tools that are not completed, so the raw or unanswered tools, because we don't have any students yet. So at initial application it's always advisable to have the person who either developed them, or the industry expert, there to support that, because often a CEO may not be a trainer and assessor, and there's no requirement for the CEO to be a trainer and assessor. So, yeah, it's really about making sure that whoever is responsible for administering the assessments within the organisation understands them. I have been in audits in the past where the client has been unwrapping the textbooks out of plastic on the day of the audit, right, and so that's not a great look.

Angela:

Oh dear. It should be dog-eared, with Post-it notes all through it.

Maciek:

Yeah, look, 100%. You know, that's the whole purpose, right, that we know our product. When it's a monitoring audit or re-registration audit, where there are completed tools, again, we should not be using the audit as the point in time where we're checking our systems.

Angela:

Yes, yes.

Maciek:

We often do, because we haven't had time, or we've been lazy, or there have been other priorities. Realistically, in a perfect world, we should be going to an audit confident that what we've submitted is compliant, right? If your validation approach has worked, if your quality assurance mechanisms have worked, there shouldn't be a problem. But if you're going to an audit, check your materials before you submit them to the auditor.

Angela:

It's not hard. Yes, yes, definitely.

Maciek:

We now have an opportunity where auditors are telling us who they want to look at. You know, four or five years ago, pre-COVID, we didn't know who was being audited, which student files; you were just told on the morning, these are the student files I want to look at. We now know, so there should be no reason why you're not checking those files and making sure that they're compliant before you submit them.

Angela:

Yeah, yeah. And then it's nice and easy to follow through the documents. And I think the other thing is: don't leave getting yourself compliant to, oh, we've just submitted our re-reg. It should be a process, it should be ongoing. Whenever we have someone come to us who's getting ready for re-reg, we want a minimum of six months to work with them to make sure that they're on track, and that's what people should be doing. At least six months before you submit, you should be preparing.

Maciek:

Well, look, yeah, I don't disagree, but I also think that we shouldn't be using re-reg as the point of checking.

Angela:

We should be doing it annually, right?

Maciek:

Yes, it's a point where it's a reminder, but realistically, let's have a culture where we're focusing on our business practices and our quality assurance practices on an ongoing basis. Yes, we can use re-reg as a point to remind ourselves, but let's try and have a culture where, when re-reg comes around, it's like, yep, we're good to go, we know, we're confident. You know, we've had this mindset of focusing on compliance for six years, seven years.

Angela:

So let's keep going. And what I've said many times before is a culture of compliance: you've got to have a culture of compliance throughout the organisation. But time and time again I see people who come to us who go, oh, we've just had ASQA contact us and we've got an audit, and I'm like, well, you should have been prepared.

Maciek:

But that doesn't always happen. We're talking about a perfect world, but yeah.

Angela:

Yes.

Maciek:

Worst case scenario, 100%. If you're coming up to re-reg, and look, some re-regs go through with no audit, some go through with an audit, and some go through with no audit but then an audit a year after. So it really depends, I think, on ASQA's workload, risk ratings, risk factors, the number of students you've put through and the type of scope that you've got. That will often determine whether you're going to get audited at re-reg or not.

Angela:

Yeah, and I don't see any consistency in how an audit is triggered. So, yes, you've got your re-reg, but you've also got addition to scope. We have people who've submitted an addition to scope and then, okay, now we're going to audit you, whereas other times they put in an addition to scope submission and nothing, they don't get anything. And we're always preparing them for that worst case scenario, so that if you do go to audit, you've got all your ducks in a row and you're ready to go. All right, so thank you very much, Maciek.

Angela:

This has been awesome, talking about assessment tools. So my final recommendations are: conduct a full assessment review and really have a look at all of your assessments, and schedule validation as a regular thing that's in your diary. I used to do it twice a year; I'd bring all the trainers in and conduct it twice a year, making sure that your trainers are meeting the requirements with the background, skills and knowledge within the units as well, and then making sure that you're testing. So my recommendation with the testing is: get your assessors to complete the assessment tools. What would be your biggest recommendations, Maciek?

Maciek:

Exactly what you said. But also, one thing I'm seeing a fair bit of, obviously post-COVID, is people taking paper-based tools that they purchased years ago, or even a few years ago, and putting them into an LMS. And that's fine, we're moving into that digital world, great. But they are often not reviewing the instructions that were provided in the paper-based version and are copying them directly across. So I have seen LMSs that say, you know, using a blue pen, answer the questions. And it's in an LMS with auto-graded theory questions.

Angela:

Or, go to page such-and-such.

Maciek:

Yeah, you know, whenever you're transferring anything, use common sense. Take the time to read or create new instructions, again using AI. It's not difficult to create new instructions that actually align to the assessment practices that you've got.

Angela:

Yeah, you could just put a prompt in: we want to take our paper-based assessment tools and put them into an LMS. Not hard at all.

Maciek:

That would be my final recommendation.

Angela:

Yeah, all right. Thank you, Maciek. Once again, great to catch up with you. Until next time, we'll see you soon. Thank you.
