
[AI Series] Building an AI Tool for Financial Compliance With Mamal Amini

Mamal Amini

Mamal Amini is the Co-founder and CEO of GovernGPT, an AI that understands financial sector regulations for the SEC and FINRA. It reviews marketing materials for sales communications and provides real-time feedback at the point of creation. Mamal is an AI scientist holding an MS in computer science from McGill University.

Here’s a glimpse of what you’ll learn:

  • Mamal Amini explains how GovernGPT helps people

  • AI-powered compliance tools for the financial industry

  • What is GPT and what is its impact?

  • Building an AI that acts like a Chief Compliance Officer

  • Mamal’s background in AI

  • Advice for building a successful AI business

  • Considerations for choosing AI-driven software products

In this episode…

Compliance in the financial services industry is essential. It's not just about following rules and regulations; it's also about protecting your clients and your business. With so many regulations to keep up with, you need a tool that helps you stay compliant.

According to Mamal Amini, using an AI tool that reviews content as your team creates it can save time and reduce the risk of compliance issues. An AI tool can identify problems, suggest alternatives, and give consistent feedback to content creators. He shares his journey of creating an AI tool to do exactly that.

In this episode of The Customer Wins, Richard Walker sits down with Mamal Amini, Co-founder and CEO of GovernGPT, to discuss how AI can help you stay compliant in the financial services industry. Mamal talks about AI-powered compliance review tools, GovernGPT, building an AI that acts like a Chief Compliance Officer, and advice for building a successful AI business.


Sponsor for this episode...

This is brought to you by Quik!

At Quik!, we provide forms automation and management solutions for companies seeking to maximize their potential productivity.

Using our FormXtract API, you can submit your completed forms and get clean, context-rich data that is 99.9% accurate.

Our vision is to become the leading forms automation company by making paperwork the easiest part of every transaction.

Meanwhile, our mission is to help the top firms in the financial industry raise their bottom line by streamlining the customer experience with automated, convenient solutions.

Go to to learn more, or contact us with questions at

Episode Transcript:

Intro 0:02

Welcome to The Customer Wins podcast where business leaders discuss their secrets and techniques for helping their customers succeed and in turn grow their business.

Richard Walker 0:12

Hi, I'm Rich Walker, the host of The Customer Wins, where I talk to business leaders about how they help their customers win, and how their focus on customer experience leads to growth. Today is a special episode in my series on artificial intelligence, and today's guest is Mamal Amini, CEO and Co-founder of GovernGPT. We've had a few other guests in this series, and you can check them out, including Alane Boyd of BGBO Co. and Gabe Rissman of YourStake. Today's episode is brought to you by Quik!, the leader in enterprise forms processing. When your business relies upon processing forms, don't waste your team's valuable time reviewing the forms. Instead, get Quik! Using our FormXtract API, simply submit your completed forms and get back clean, context-rich data that is 99.9% accurate. Visit to get started. Before I fully introduce today's guest, I want to give a big thank you to Parham Nasseri of Investorcom for introducing me to Mamal. Go check out their website at to learn how they turn regulatory obligations into your advantage. So today I'm super excited to talk with an actual AI scientist. Mamal Amini is the Co-founder and CEO of GovernGPT, which performs real-time reviews of marketing collateral for advisory firms. GovernGPT reviews marketing materials as if the Chief Compliance Officer were reviewing them themselves. So content creators, marketing teams, investor relations, sales, and business development all have access to compliance reviews in real time, and by the time collateral is submitted to compliance, it's in really good shape. Mamal, welcome to The Customer Wins.

Mamal Amini 1:55

Hi, Rich, thanks for having me. My pleasure.

Richard Walker 1:57

Yeah, mine too. I'm excited to talk to you. If you haven't heard this podcast before, I talk with business leaders about what they're doing to help their customers win, how they build and deliver a great customer experience, and the challenges of growing their own company. Mamal, let's understand your business a little better. How does your company help people?

Mamal Amini 2:13

At the essence, we're a collaboration tool, so that the marketing, sales, and investor relations teams can collaborate with the compliance and legal teams in a much faster and smoother process. We review collateral in real time with AI, by essentially emulating the way a CCO would have given those reviews, and give those judgments in real time to the salespeople who create the content. So this process is, in general, a lot smoother, while in the meantime it saves a lot of time, makes sure content is consistent, and everything gets reviewed. And there's just a lower likelihood of churn for the salespeople who create the content.

Richard Walker 2:13

So I remember when I was an advisor, now this is over 20 years ago, so things have changed. If I wanted to send out an email campaign to clients, if I wanted to put together a one page sheet on my company, it would take my broker-dealer two weeks, maybe three weeks to review and approve it and then tell me it's wrong or change it. So I don't know if that's true today. What is your product actually doing then to make that process easier?

Mamal Amini 3:19

So we have an AI that's trained on, for example, SEC regulations, on how a compliance officer would essentially review collateral by default. So it'll flag anything that needs to be flagged: any language that's promissory, any disclaimers that need to be added, anything that's cherry-picking, whether a chart needs to be analyzed to understand if it conforms to how the SEC's marketing rule wants you to behave. The best in the market right now is essentially how ACA operates, which is about three days, with humans reviewing this, whereas you can get that judgment in real time as content is being created. And whatever the content may be, as long as it can be converted into text, we can essentially analyze it and give those judgments in real time.

Richard Walker 4:08

Okay, so like I don't know if this is a fair comparison, but my head's kind of thinking. Is this like Grammarly? Is this kind of like sitting with you and helping you craft a better message?

Mamal Amini 4:17

Yeah, you nailed it. So Grammarly for somebody who doesn't speak English as a first language is perfect, right? Because you just don't think about English all day long, because that's not your first language. Right? I use it all the time. Now, let's say you're at an RIA or a broker-dealer, and you work there not as a compliance person; you don't think about compliance day in and day out. Whereas you want the collateral you create to be compliant, whether it's an email, a podcast, a sales pitch, or anything else you're taking to the investors, you essentially need that to be compliant. So what you currently do is send it to compliance, compliance reviews it, it comes back to you, you take care of the reviews, and it goes back for final approval. Whereas imagine this whole process becomes one round: you pre-review everything with our AI, and then by the time it goes to compliance, it just gets an approval.

Richard Walker 5:06

Wow. Do you have any metrics on that? Like, are you seeing that 99% of things get approved the first time or what? I'm just curious.

Mamal Amini 5:14

Yeah, it's a matter of pattern recognition. So off the start, it starts anywhere from 30 to 50%. But then when we onboard clients, we take care of their company attributes. So if they have any library of disclosures, any policies and guidelines, any internal bulletins that they've had before, we take all that into account to give the feedback as if, again, their own CCO would have given that feedback. And then we take the past reviews, and it essentially becomes a version of the CCO giving that feedback. And with the firms we've worked with, it's gotten to the point of 90% approval automatically, because it's pattern recognition at the end of the day. So the more it sees, the more it knows exactly how to give those comments and reviews.

Richard Walker 5:59

So you guys are pretty early, then. I mean, your product is fairly new, right?

Mamal Amini 6:04

Yeah, we are. I mean, this whole technology is. The whole thing was only accessible to anybody outside of OpenAI and Google in, like, the last year, essentially. Yeah, nobody had these types of technologies before.

Richard Walker 6:21

So what made you say, I want to solve this problem? I mean, other than being enamored with GPT models, which is fascinating, what made you choose this problem?

Mamal Amini 6:30

It's more of a communication with your clients: you pick a section of the market, you try to talk to them, and you understand. We actually had two different problems that we solved; we have two different products that were live. The last one is live and has active users; that's the one the CCOs are using. Yeah. This problem is something that's really urgent, and it captures the level of value that I believe compliance officers would really appreciate, which is reduced risk, at the essence of it.

Richard Walker 7:01

Yeah. And speed. I mean, just the fact that things can get done faster, is amazing.

Mamal Amini 7:08

Yeah, we have just a much faster turnaround time. And obviously, in the meantime, it saves time too.

Richard Walker 7:12

Yeah, yeah. Okay. Let me ask you a maybe tough question. You named your company GovernGPT, and I presume it's to echo ChatGPT. But do most people know what GPT really means, and the impact of it?

Mamal Amini 7:25

I don't think so. But it's fine. I mean, it's the transformer models, which actually originally came out of Google, and then OpenAI created the first GPT, and then OpenAI released newer versions; right now we have GPT-4. But with all these models, usually you have two types: you have either the GPT type or the BERT type. It's just how you train them, but they all essentially do the same thing: predict language by seeing the likelihood of words next to each other. And most of the corpus of language they're usually trained on is just English.
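The word-likelihood idea Mamal describes can be sketched with a toy bigram counter. This is a deliberately tiny stand-in for a transformer, which attends over much longer contexts; the corpus and function names here are made up for illustration:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the huge text corpora GPT-style models train on.
corpus = "the fund may lose value the fund seeks growth the fund may lose value".split()

# Count which word follows which: a crude bigram table instead of learned attention.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("fund"))  # "may" follows "fund" twice, "seeks" only once
```

A real model replaces these raw counts with probabilities computed by a neural network, but the objective is the same: guess the likely next token.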

Richard Walker 8:05

Got it. Yeah. So I mean, it does make sense. I think enough people have seen ChatGPT, it's the buzzword everywhere we go, that they're aligning with it, right? So let me lean on the scientist here: what does GPT stand for? Generative Pre-trained Transformers. Okay. Yeah, I can never remember that; I knew the generative part...

Mamal Amini 8:26

Pre-trained, yes: the fact that you can pre-train these models without actually having task-specific data, which is unlike how it used to be before with machine learning models. Yeah, AI is now more accessible to people, because you can have these models that already come with a lot of good biases about how we do things in life. So, for example, ChatGPT understands the English language really well.

Richard Walker 8:48

Got it. So machine learning models are ones you really train over time, but GPT comes pre-trained. It's kind of like programming languages, right? There used to be machine language, and then BASIC, and then third-level and fourth-level languages. So they're building on top of each other, right? Is that a fair way to look at it?

Mamal Amini 9:05

It is. You can then fine-tune these models and make them better, which is one of the things we've done. At the essence of it, in the grand scheme of things, it's sort of becoming like software engineering: you can have a faster feedback loop, right? What used to happen before these generative models is that you had to train everything from scratch, every time you wanted to train something. I mean, you could have predictive models that use a pre-trained set of weights, and as long as you pre-train them, you can reuse them; this used to happen in computer vision, for images and objects. But this was not the case for language on a broad set of tasks before GPT. It was only done for specific tasks, one at a time, for each model. You want to do summarization, question answering: these are different tasks in language. So you would have trained or pre-trained each of these models for a specific task, whereas right now ChatGPT can do all of them.

Richard Walker 10:14

Yeah, man, I'm so glad we're talking about this. I mean, I know a bit about AI, I've studied a lot of it, but you're helping give me some insights as to how this actually works. So I want to ask another question about training. You've trained the model to be like the CCO, the Chief Compliance Officer, and my question is really two things. First one is, as it's processing documents or language for people, is it learning from that? Is it training itself again from that? Or is it only applying what it has been trained on to correct the document?

Mamal Amini 10:45

It starts with what it's been trained upon; beyond that, it really depends from firm to firm, if they want it. We have a zero-day retention policy, so we don't keep any of the data, but the model can essentially capture, while anonymizing all the data, the level of judgments that a CCO would give, if they choose that to be the case. So we have clients right now that just use the base model, and we have clients where we fine-tune it for their specific purposes. The thing is, as I said, with the base model it starts by saving you like 50% of the time, whereas with the fine-tuned model it gets all the way to 90, 95%. And smaller firms have been more than willing to actually share and use it right now. When I'm saying smaller, I mean under 15 billion in assets under management.
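A minimal sketch of the anonymization step Mamal describes might look like scrubbing identifying details before any text reaches the model. The redaction patterns and placeholder names below are illustrative assumptions, not GovernGPT's actual pipeline, which would be far more thorough:

```python
import re

# Illustrative redaction patterns; a production pipeline would cover many more cases.
PATTERNS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),   # email addresses
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),           # US SSN format
    (re.compile(r"\$[\d,]+(?:\.\d+)?"), "[AMOUNT]"),           # dollar amounts
]

def anonymize(text: str) -> str:
    """Replace identifying details with placeholders before compliance review."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

sample = "Contact jane@examplefund.com about the $1,250,000 allocation."
print(anonymize(sample))  # Contact [EMAIL] about the [AMOUNT] allocation.
```

The point of the sketch: the model can still learn the shape of a CCO's judgments from redacted text without the vendor retaining anyone's identifying data.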

Richard Walker 11:35

Okay, so let me ask a different question than I was going to ask. If you have a base model, is that what you upgrade over time as your product?

Mamal Amini 11:46

Yes. So we have scrapers built off of regulators' websites, for when any enforcement action, say, comes out. One of the things we're doing, for example: with the recent fines that happened to RIAs because of hypothetical performance, nine firms got fined. Now, any fine that comes out, we essentially take that fine and run it against every past collateral they've had. So if, say, you had collateral from five years ago with a hypothetical performance claim, and you don't have the right disclaimers or the right policies informing all of that, it can go and find it for you. It tells you, hey, you need to pay attention to this, and it gives a suggestion of how to go about it. Because the first problem is that you have tens of thousands of documents to go through, and it's almost humanly impossible to do that while you still have all the other work the company requires of you, all the other tasks that go on in compliance that aren't just reviewing collateral.
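The retroactive scan Mamal describes, running a new enforcement theme against years of past collateral, could be sketched like this. The trigger phrase, disclaimer text, and documents are made-up assumptions standing in for whatever a real enforcement action would flag:

```python
# Hypothetical trigger drawn from the enforcement theme Mamal mentions:
# hypothetical-performance claims must carry required disclosures.
TRIGGER = "hypothetical performance"
REQUIRED_DISCLAIMER = "hypothetical results do not reflect actual trading"

documents = {
    "pitch_2019.txt": "Our hypothetical performance shows 12% annual returns.",
    "letter_2021.txt": ("Hypothetical performance shown; hypothetical results "
                        "do not reflect actual trading."),
    "email_2022.txt": "We focus on long-term value investing.",
}

def flag_documents(docs):
    """Return names of documents that make the claim but lack the disclaimer."""
    flagged = []
    for name, text in docs.items():
        lowered = text.lower()
        if TRIGGER in lowered and REQUIRED_DISCLAIMER not in lowered:
            flagged.append(name)
    return flagged

print(flag_documents(documents))  # ['pitch_2019.txt']
```

A real system would use the model's judgment rather than literal string matching, but the workflow is the same: new rule in, every past document rescanned.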

Richard Walker 12:45

Yeah, no, I love that. I love that you guys can be backward-facing as well as forward-facing. One of the things I keep thinking about is, if you train this model really well, it's going to do a great job, right? And you're saying that you don't retain the information that it's processing. So it's not constantly training itself? Right? It's not reading somebody's document and saying, oh, I've learned something new. And I'll apply that to my model itself, right?

Mamal Amini 13:09

It's not one model that gets used for everybody; it's for each individual client. Because what we found is, it's entirely irrelevant to us. First of all, we don't do any cross-training, but it doesn't even make sense technically to do so, because what's unique to one RIA is very unique to them and not to the next RIA. Their understanding of risk is always conditioned on a lot of company attributes: what are the products you have? Who's your audience? When are you putting out collateral? What is the collateral you're going out with: is it emails, is it pitches, is it RFPs? To what extent do you want to pay attention to each in terms of risk, and how risky do you want to be? To what extent do you want to add the disclaimers? Where do you want to add them? Do you have your own specific way of doing it? So there's a lot of nuance that goes into this that makes it almost practically useless to share the model itself. But the model has that judgment learned for each individual client, if they want that.

Richard Walker 14:07

Okay, that makes a lot more sense. I mean, I was thinking about that with different styles of companies. That essentially means you could take a base model and permute it any number of ways to meet the needs of all your different types of customers. Do their individual models get updated and trained over time with their own content? Or do you maintain a base-model-like footprint for them too?

Mamal Amini 14:27

So we have a base model, and then each of their models also has, let's say, a recency bias, for example. That's one of the things we always want to have, because their understanding of risk changes across time: maybe they've now grown, or they're a different type of firm, or maybe there's a new regulation that came into effect. Obviously, the AI is aware, but at the same time the AI is never going to know this better than the CCO; the CCO knows this best, right? So if the CCO gives a review today versus a review that was given two years ago, we always make the AI try to emulate the way the review was given today. So it keeps learning for that specific firm, right? And then there are new products that you open up, you have a new set of investors, and maybe they care about different things. So you have your own innovations as the CCO, and those innovations, again, get taken into account for that specific firm.
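The recency bias Mamal mentions, weighting a CCO's newer reviews over older ones, can be sketched as exponential decay on review age. The half-life and the review data below are illustrative assumptions, not GovernGPT's actual weighting scheme:

```python
# (age_in_days, verdict) pairs for past CCO reviews of similar language.
# Verdict 1.0 = approved as-is, 0.0 = rejected. All data is made up.
reviews = [(730, 0.0), (700, 0.0), (30, 1.0), (7, 1.0)]

HALF_LIFE_DAYS = 180  # illustrative: a review loses half its influence in ~6 months

def weighted_verdict(history):
    """Average past verdicts, down-weighting older reviews exponentially."""
    weights = [0.5 ** (age / HALF_LIFE_DAYS) for age, _ in history]
    total = sum(weights)
    return sum(w * v for w, (_, v) in zip(weights, history)) / total

# The two recent approvals dominate the two old rejections.
print(round(weighted_verdict(reviews), 2))
```

With equal weighting the verdict would sit at 0.5; with decay, the firm's current stance wins, which is exactly the behavior Mamal describes.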

Richard Walker 15:29

So Mamal, this must mean that this lends itself really well to being a software-as-a-service company, meaning AI models like this do, because you need to keep them moving forward. You said recency bias; I liked that idea, right? Bias toward what's more recent in regulation and law. So your customers must be looking to you to maintain the model over time to keep it valuable to them, right? It's not just, I bought it and I'm done, right?

Mamal Amini 15:55

Yeah. Yeah, exactly. I mean, that's why we even started with essentially just a subscription model. It makes more sense for software companies like us, and for the clients themselves too, because they can test it out, see how it works, see whether it even works for them or not, and then they can get rid of it at any moment they want, right? So it makes sense to them. It also makes sense for us, because it forces us to be far more customer-serving at the end of the day. That's how we've been looking at our product: we have customers right now giving us feedback, and we've been iterating week after week after week. So if you've been following my LinkedIn for, probably, the last six months, you'll notice that I released a product six months ago and I released a product last week, and it's an order of magnitude of difference. Now, obviously, someone can say, well, why don't you wait? You can, but that's up to you: you don't wait for YouTube to get updated 10 years from now, because it delivers value to you today, right?

Richard Walker 16:52

Right, yeah, no, you've actually said a couple of really important things, one of which I've echoed on prior episodes. And that is, as a software-as-a-service-based company, getting paid monthly or annually, consistently over time, gives you an incentive as the product owner to keep making your product better. And I think that's a real value to customers. That's why I don't think buying a product once and being done is a good idea. I think paying for it incrementally over time, and watching that vendor continually improve their product, helps you judge whether it's still valuable. If they're not improving, you stop paying them: you're not keeping it up, I'm not gonna keep doing this.

Mamal Amini 17:30

The motto is, make something people want. And you don't do it once and you're done; you have to keep iterating. The world is an ever-changing world. That's the world we're living in, whether it's the regulations or anything else; life is constantly changing. So you have to constantly iterate, constantly improve, and constantly listen to your customers and your users. So stay tuned: next week, we're going to have two new releases.

Richard Walker 17:55

That's awesome. Let me ask you a slightly different question. You have a really, really deep background in artificial intelligence, right? I mean, I was looking at your bio: you've done some deep learning, you've done machine learning. Tell me more about what that background is like and what you were doing.

Mamal Amini 18:10

Yeah, I used to work at this lab, which is essentially the world's largest AI lab, and published a lot of papers with different geniuses. I worked with Yoshua Bengio, for example, who's one of the winners of the Turing Award, which is like the Nobel Prize of computer science, my apologies. I worked in many different spaces, from natural language processing to reinforcement learning, which is what my supervisor at the time worked on; she's one of the, you know, OG reinforcement learning people in the world. So, essentially anywhere from fundamental AI, to deep learning, to optimizations for deep learning and computer vision models to make them more robust, to reinforcement learning, which is essentially decision-making. I tried to build models, publish papers, and innovate there. After that, I joined big corporates and startups, really late-stage startups. The last startup I worked at had a billion dollars in funding, and we built GPTs there.

Richard Walker 19:23

Now that is so cool. What did you get your degree in? Computer science. Computer science? If other people want to follow your path, what would you recommend they do?

Mamal Amini 19:33

Do something that you find really fun and are willing to spend hours on; I think whatever you do, it's going to take that. If you mean AI, then I'm not sure the AI in 10 years is going to be the same AI as now, because I think AI right now is more engineering than science, which is why I myself went from publishing papers and patents to building GPTs. I wasn't working on the fundamentals most recently; right before that, I was. And that shift happened because that's just what the world needs right now: AI science has reached a point where we really need to democratize access to AI for the rest of the world, for other industries, like compliance here, like financial services.

Richard Walker 20:23

AI is such a fascinating thing to me, because I'm from the generation that didn't have the internet, saw the internet come, saw e-commerce happen, saw the advent of Google. I mean, when I started developing software for my company, I love to tell the story that if I needed to solve a problem, I went to Barnes and Noble, I went to Borders Bookstore, I sat on the floor for hours reading books, trying to find answers. Today, you just go on Stack Overflow and ask, what's the answer? Or you go to ChatGPT and ask, what's the answer? So it's remarkable how things have changed. With AI, you can go learn so much on your own without having been a scientist, but yet I envy your background and all the deep work you did to really, truly understand it. So I guess part of my question is, how much does somebody have to dive into the depths that you've dived into to be able to build a company like you're building and solve these types of problems? And I know you're biased, but I want you to confront that bias.

Mamal Amini 21:19

I feel like at the end of the day, you always end up saying, well, you've just got to focus on the next thing rather than what's been done already, so that you don't, quote, unquote, validate yourself further, and you constantly focus on what you can do to improve. That's usually been my focus in general, forever. What do you need to build a company like mine? I think, to be honest, you really need to understand what these language models do at the core, what their limitations are, and where they really lie. Because, for example, in the world of compliance and legal, they really care about exactness, right? ChatGPT is really good with language, but it's not good enough to be relied upon for that. And that's literally the first problem we solved, like a year ago, with our previous product; obviously, we have that now. The first product we built was about how you make sure you can trust AI. Because you don't want to blindly trust AI, so at the end of the day, how do you do that? You find the right source every time it generates any line, so it tells you exactly where this is coming from. That way, if, let's say, a compliance officer wants to validate it for themselves, they have access to that. And I think that's part of it, right? You need to understand its limitations and its strengths. That's the scientific side of it. On the engineering side, obviously, you want to learn how to go about building it, and there are a thousand blog posts right now that you could look into. And then on the business side, you really need to understand what each of these models costs, how big they are, how much data you need. Do you need data? What's the data that you need?
Can you get an advisor to help you train these models? Right now, for example, we have an ex-SEC regulator with experience at major banks in the US helping us with this base model, so that the base model gets to a state that's already almost perfect. So even if you don't want it tuned based on your judgments, that's okay; the base model does a pretty decent job.
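The source-per-line trust mechanism Mamal describes could be sketched as matching each generated line back to the approved passage it overlaps most. This toy word-overlap lookup, with made-up source snippets and identifiers, is an assumption for illustration, not GovernGPT's actual retrieval method:

```python
import re

# Approved source passages the AI may draw from (snippets and ids are made up).
sources = {
    "marketing_rule_s2": ("advertisements may not include any untrue statement "
                          "of a material fact"),
    "firm_policy_4a": ("all performance figures must be accompanied by "
                       "net-of-fee results"),
}

def words(text):
    """Tokenize to lowercase words, keeping hyphenated terms together."""
    return set(re.findall(r"[\w-]+", text.lower()))

def cite(line, passages):
    """Return the id of the passage sharing the most words with `line`."""
    lw = words(line)
    return max(passages, key=lambda k: len(lw & words(passages[k])))

claim = "Performance figures must show net-of-fee results."
print(cite(claim, sources))  # firm_policy_4a
```

Real systems use embedding-based retrieval rather than raw word overlap, but the payoff is the same: every generated line carries a pointer a compliance officer can check.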

Richard Walker 23:22

So are these customers that are helping you?

Mamal Amini 23:25

This one is an investor slash advisor who's helping us. Customers also help us and give feedback regularly. So we have customers, and they tell us, oh, well, how do you make sure that salespeople use it all the time? That's one of the things we're releasing next week: having it as a plugin to Google Docs and Word documents, so that you can see the judgments from compliance in real time, and then you can click accept, reject, accept, reject, and immediately get the document updated. Getting the document to compliance is going to be a lot easier, so salespeople and marketing folks and IR will essentially adopt this.
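The accept/reject flow Mamal previews might be sketched as applying only the suggestions an author accepts. The suggestion format and sample text here are hypothetical, not the real plugin's API:

```python
# Each suggestion: (text to find, proposed replacement). Format is hypothetical.
suggestions = [
    ("guaranteed returns", "potential returns"),
    ("the best fund in the market", "a fund we believe is competitive"),
]

def apply_review(doc, suggestions, accepted):
    """Apply only the suggestions whose index the author accepted."""
    for i, (old, new) in enumerate(suggestions):
        if i in accepted:
            doc = doc.replace(old, new)
    return doc

draft = "Our strategy offers guaranteed returns from the best fund in the market."
print(apply_review(draft, suggestions, accepted={0}))  # only the first fix applied
```

The author stays in control: each flagged phrase becomes a one-click decision, and the document updates immediately, which is the adoption point Mamal is making.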

Richard Walker 24:06

Yeah. Well, I think it's really fascinating that you solved that problem of accuracy, or correctness. I had a lawyer in the food industry on this show a couple of months ago, George Salmas. And he talked about a group of lawyers who used ChatGPT to help bolster their case, and they went before the judge and quoted case law that does not exist. And they got thrown out of court for it: you're just making stuff up. You can't necessarily trust that the AI is right, but you are building products in order to be able to trust them.

Mamal Amini 24:39

Right, right. That's exactly one of our first points, and one of our constant themes: it's a trust-building exercise. So we almost always start with the simplest tasks that have already been repeated many times. For example, we're working with two firms that each have both an investment advisor and a broker-dealer. So if they have collateral that's more on one side, we start to fine-tune for that specific set of collateral they already have. If they have really clear documents, if everything is well documented, it makes it a lot easier for the model, again, to be fine-tuned in that direction. And at the end of the day, the compliance officer has to approve or disapprove of how the AI is doing. So the AI is constantly getting tuned to the desires, the understanding, and the judgments of the Chief Compliance Officer. The Chief Compliance Officer, at the end of the day, is the one who controls how the AI gives reviews to marketing and sales.

Richard Walker 25:34

Yeah, right.

Mamal Amini 25:35

So, trust is always there, my apologies.

Richard Walker 25:38

I had this kind of question in my head to ask you, and you've answered it in a couple of ways, but I want to just ask it. Instead of me summarizing, I want to ask it again and let you kind of direct it. Which is: if you were counseling somebody to find a software product that is AI-driven, a large language model-driven product, how would you tell them to choose the product? How would you tell them to go about choosing such a product? What's important in that decision-making process?

Mamal Amini 26:01

I think you need to see how it will actually impact you in your life, rather than just quantify exactly what it does. Because, in general, I've found that a lot of people are just interested in using AI tools, and that's great; that's why we started building AI tools. But you want to be very process-driven. And this is also a thing you'll see with companies: if they're focused on too many different processes, they're probably not optimizing for one specific thing very, very well. And the odds are, if they're not Google and you haven't heard of them, the product that comes out of them is almost practically impossible to get into perfect shape before that process is fully optimized. We know a few companies, for example, in our own space, that try to solve everything with AI. That's cool, but at the same time, the feedback I hear from clients is, well, it doesn't do the job. So I think that's one of the obvious signs. You really want to look at the team who's building it, look at their background, because again, if they're not Google, they don't have the same quality assurances. And don't get me wrong, I love salespeople, but I've seen firms that are 60 to 70% salespeople, and if they're not made up of product engineers or AI scientists (obviously, I'm biased), in my opinion they're not innovating on the product side left and right. They're not getting it to the perfect state, they're not able to get it to that level of trust-building, which is exactly what, at the very least, I know lawyers want. Lawyers, or compliance professionals, want to be able to trust the AI when it says something.
That's the whole gist of how to work with lawyers. And lawyers really are tech-forward; I've met a lot of tech-forward lawyers who are just really interesting, and they tell me, hey, I just need to trust it. That's totally fair. So you want to have a level of security: you want to host your models on something like AWS, which is exactly what the CIA uses, and you want to anonymize your data. You really want to care about anonymization and confidentiality of IP; that's really, really important. You want to mention that you have a zero-day retention policy, which a lot of AI companies don't, and why not? But at the same time, I think these all contribute to how much you contribute to the client's process, work, and daily life.

Richard Walker 28:51

Yeah. So I want to summarize a few things that I've heard, because this has been amazing to hear from you. When you're looking at a product driven by a large language model, a GPT-like AI: number one, you want a company that has built a model that can be trusted and validated. Two, you want to know they're going to consistently build and evolve the model over time, because it's going to change over time, and you want to see that they're doing that. So let me ask you one more question, because we're going to have to wrap up in a second, and you've said so many amazing things, Mamal. What is something you know about AI that maybe most people don't know?

Mamal Amini 29:29

These language models, as cool as they are, have a lot of people assuming that we're getting close to artificial general intelligence, or that AI is going to take over, and I think that's a premature way of thinking. What's really happened is more like this: we had horses, and then we got cars, and then it took us a long time to get to autonomous driving, and even then humans are still making the high-level strategic decisions. What we've found throughout history is that we keep innovating as humans alongside the technological capabilities that are out there. AI is not going to take over; it's only going to augment our processes. I think the world we're aiming for, and what I believe will happen in the next five to ten years, is that every compliance professional, and really every other professional in the world, will have an AI assistant, one you can delegate tasks to as if it were a real human assistant you're training. It does a task you clearly define, it runs on autopilot, it comes back, you check it, and it's good to go. It saves you time, it saves you headaches, and you can focus on higher-leverage tasks.

Richard Walker 30:51

Well, I think you demonstrate that just by the focus of your product: you're building a niche product for a very specific purpose within specific types of companies, and you're not making it a public model for everybody to consume. So as for getting to the point where there's one model for the whole world, I don't think we'll get there. I don't know that that's ever going to happen, because I don't think companies are going to share all their models with everybody, and suddenly there's this one model that knows the world and controls us all.

Mamal Amini 31:18

Yeah. And also, I think people focus on the word data and ignore the fact that there's a process. Process beats data, always. If you understand the process very well, even if you don't have the data right now, with these generative models you can get a really good result.

Richard Walker 31:32

Yeah. Awesome. Mamal, look, as we wrap this up, I have another question for you. But before I ask that question, what's the best way for people to connect with you?

Mamal Amini 31:40

You can find me on LinkedIn, Mamal Amini. I'm regularly active there.

Richard Walker 31:47

Awesome. Yeah, I found you there. You've got a lot of good stuff going on. Okay, here's my last question. Who has had the biggest impact on your leadership style, or how you approach your role?

Mamal Amini 31:57

It's the guy who founded Monzo, Tom Blomfield. He's been helping us think about things from different angles, and he's been immensely helpful. We've talked regularly over the last six months; he's one of our major investors through YC, through Y Combinator, and it's been truly a gift. Just the other day, he told me: imagine the most formidable version of yourself, and become that today.

Richard Walker 32:30

Nice. Imagine the most formidable version of yourself and start becoming that today. I love it. So in other words, imagine your future the way you want it and start living it now.

Mamal Amini 32:40

Yeah, because I feel like we always underestimate the time it takes to get there, and it takes longer, but we also overestimate the amount of effort it takes, and so we almost always end up delaying things.

Richard Walker 32:56

Yeah. Oh my gosh, I've been guilty of that. Oh, that's going to take two weeks of work, so I put it off and put it off and put it off. Then I finally do it, and it takes three hours, and I'm like, that was it?

Mamal Amini 33:11

Luckily, I delegate that.

Richard Walker 33:14

Smart, smart. And this has been fun. I really want to give a big thank you to Mamal Amini, CEO of GovernGPT, for being on this episode of The Customer Wins. Go check out Mamal's website, and don't forget to check out Quik!, where we make processing forms easy. I hope you enjoyed this discussion. Please click the like button, share this with someone, or even subscribe to our channels for future episodes of The Customer Wins. Mamal, thank you so much for joining me today.

Mamal Amini 33:41

Thank you. It was a pleasure.

Outro 33:44

Thanks for listening to The Customer Wins podcast. We'll see you again next time and be sure to click subscribe to get future episodes.
