MALENA - ESG LLM Usecase
The following are some of the points mentioned in a talk by IFC at Data AI Summit 2023
Takeaways
- MALENA is an ESG analyst platform developed by IFC
- 200 experts work on various areas of ESG at IFC
- IFC
- Member of World Bank Group
- Invested almost $37 billion in emerging markets this past year
- Look at a range of sustainability aspects for all the investments
- Use case - How to effectively review the large number of ESG-related technical documents about a company?
- MALENA Platform - MAchine Learning ENvironment for Social and Governance Analyst
- The model can read 19,000 sentences a minute
- Read documents and find issues of relevance in the document
- Amount of water consumption
- Social impact
- Highlight Context
- Tells whether a phrase is found in the document and the context in which it appears, i.e. whether the phrase occurs in a positive or negative context
- What data and AI were used?
- Create ESG Domain specific AI models
- Started as a POC in 2019 and scaled up into a production system
- Large organizations have large data and LLMs need big data.
- IFC has been in business for nearly 70 years and has lots of unstructured data - PDFs, Word documents, maps, etc.
- Using the Databricks Lakehouse, IFC was able to leverage faster data processing and GPU compute that scales on demand.
- Fine-tuning MALENA
- A taxonomy of over 1,200 risk terms has been defined
- Over the past 4 years, IFC has used 150k training examples to teach MALENA the meaning of risk terms such as biodiversity and emissions
- It has read over 250k documents and created insights for 10k companies in 186 countries and 26 sectors
- It has identified more than 32 million risk terms and classified these as positive, negative, or neutral
- MALENA can read 19k sentences per minute - 950 times faster than a human reader
- MALENA generates dashboard and displays results
- Document
- Sentiment
- Risk type
- Unlock curated information from tons of unstructured data
- Benefits
- Reduce time on analysis from 30 days to 1 day
- Reducing data and analysis gaps in doing ESG investing
- News and bond prospectuses were underused because investors do not have the in-house capacity to read through unstructured data; MALENA makes this feasible
- How did IFC use Azure and the Databricks Lakehouse?
- Majority of the data processed by MALENA is unstructured data
- Three major sources
- IFC data
- Public Disclosure
- Alternate data
- Azure Data Factory organizes the ingestion pipelines; more than 20 pipelines are used for ingestion
- Ingested data is stored in Azure Data Lake Storage (Gen2)
- Three layers of storage
- Bronze: Landing data
- Silver: bronze data is cleaned, conformed, and harmonized
- Gold: Ready to consume data
- Use Azure Databricks for data engineering
- Using Databricks for finetuning LLMs
- Transitioned from BERT to RoBERTa
- Use supervised machine learning to fine-tune the 125-million-parameter model
- Model is stored in Model Registry on Databricks
- IFC runs a daily inference Spark job; every night a scheduled job runs and updates the inference database
- In addition to batch inferences, IFC also does real time inference.
- Both real-time and batch results are displayed to internal and external users via Power BI interfaces
- ESG domain AI for Emerging markets
- Translation
- Named Entity Recognition
- Question Answering
- Sentiment Analysis
- MALENA Model Design
- Adopt RoBERTa architecture
- ESG Attention Augmentation Adapter
- Multiple sentiments for the relevant risk terms within a single sentence
- Typical input is a sentence with a risk term
- Fine-tune the base model for ESG sentiment analysis task
- Classify into positive, negative, and neutral
- Add a classification head on top of RoBERTa
- Use MALENA to predict ESG sentiment
- Data Labeling
- High-quality training data is crucial for supervised training
- 150k manual labels with 1200 risk terms covering ESG topics
- Quality control
- Labeling guide: ensures all labelers follow the same approach
- Consensus between at least two labelers
- Active learning - Continuous feedback
- Feedback is captured on the backend and then fed into the model
- Training data sources: 45k ESG news, 30k internal project documents, 35k company disclosures, 18k DFI (Development Finance Institution) disclosures, 7k independent project evaluations, 15k user feedback docs
- Model Evaluation
- 150k sentences - 85% training set and 15% testing set
- 92% accuracy and 91% f1 score - better than plain vanilla RoBERTa 68% accuracy and 27% f1 score
- Efficiency gains
- Less time spent on data gathering and sorting - more time for analysis
- Early-stage due diligence: review time reduced from 1 week to half a day
- Reduced overdue supervision text review by 45%
- Risk identification
- 100 legal agreements totaling 7,000 pages reviewed in 12 minutes
- Trustworthy AI
- Data Security + Data Privacy and Data Transparency
- Transparency dashboard inspired by IBM AI ethics guidelines
- Predictive Performance
- How well MALENA addresses the use case (ESG sentiment analysis); since it is a classification problem, accuracy and F1 score are used
- Fairness
- Difficult to assess; a fairness score is used to evaluate whether predictions unfairly penalize any specific group
- Explainability
- How MALENA is able to provide reasoning for the sentiment analysis
- MALENA Roadmap
- Language Expansion
- Improved Model Serving
- Data automation with Unity Catalog and Delta Sharing
- Mapping to industry standards
- Climate Paris alignment - use case
- Integration with other IFC systems and data providers
- Establishing a solid and powerful data infrastructure like the Databricks Lakehouse combined with LLMs is key to addressing industry challenges
- Domain Specific AI will become prevalent in the days to come
- MALENA is a great example of how data and AI enable a stronger case for impact investing
OpenAI
For fun, I asked a ChatGPT plugin to get a transcription of the above video and generate a brief summary. The ChatGPT UI could not cleanse the 20k-word raw transcription and ran into problems twice. I subsequently used Langchain to split the transcript into chunks and obtained the cleansed transcript below, chunk by chunk, from OpenAI.
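A minimal sketch of this split-and-cleanse step, assuming the Langchain `RecursiveCharacterTextSplitter` and the OpenAI Python client; the chunk size, model name, and cleanup prompt are illustrative choices, not the exact ones used for this post:

```python
# Split a long raw transcript into chunks and ask OpenAI to cleanse each chunk.
# Chunk size, model, and prompt are illustrative; adjust to taste.
from langchain.text_splitter import RecursiveCharacterTextSplitter
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def cleanse_transcript(raw_transcript: str) -> str:
    splitter = RecursiveCharacterTextSplitter(chunk_size=4000, chunk_overlap=200)
    chunks = splitter.split_text(raw_transcript)

    cleaned_parts = []
    for chunk in chunks:
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system",
                 "content": "Clean up this raw talk transcript: fix punctuation and "
                            "obvious transcription errors while keeping the meaning."},
                {"role": "user", "content": chunk},
            ],
        )
        cleaned_parts.append(response.choices[0].message.content)

    return "\n\n".join(cleaned_parts)
```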
Transcript
Good afternoon, and thank you very much for being here. Let’s get started. As we begin, I want to ask everyone here to imagine that you’re an investment analyst. It’s four in the afternoon, and your colleague emails you to say they have to take a project to the investment decision committee next week. They apologize for the short time frame but ask if there’s any way you can do a quick review of all the publicly available information you can find about the project to ensure there are no red flags. In fact, they even have some of this information handy that they can send over if it will help, like the most recent annual report.
You say yes, you’ll do your best to make it happen, and you open the email they send you with a 700-page attachment. My colleagues and I frequently face this dilemma. Maybe you also face this dilemma too.
My name is Atya Karmali, and my colleague Blaise Andwedi and I work alongside 200 experts who cover a variety of topics, including climate change, biodiversity, and corporate governance, at the International Finance Corporation. The International Finance Corporation (IFC) is a member of the World Bank Group and is the world's largest global development institution focused on making investments in the private sector. This past year, we invested almost $37 billion in companies in emerging markets. When making an investment, we look carefully at the sustainability impact of a client's operations and consider a multitude of issues, such as their track record for worker safety, methods of pollution prevention, and approaches to engaging stakeholders.
Helping clients meet their financial, development impact, and sustainability goals continues once they’re in our portfolio. This is achieved through input from a team of experts. But let’s not forget about our investment analysts, who begin on page one of that 700-page document.
Successfully delivering this investment strategy means analysts must review between 10 to 20 documents a week. Corporate reports are often hundreds of pages in length and frequently filled with technical jargon. In other words, the amount of information you have to review can be vast, and you often only have a few days, and on one recent occasion, hours, in which to make this happen. Now, imagine you have a colleague who never gets tired, never sleeps, and can read 19,000 sentences a minute.
Meet Malena, or Machine Learning ENvironment for Social and Governance Analyst. Malena is an artificial intelligence, and she has been taught to read documents and find issues of relevance to my colleagues. For example, she can identify the amount of water a company consumed in its operations, the number of women members on a company’s board, or any impacts the company posed to the health of communities in the past year.
Not only does Malena tell my colleague if a word or phrase is found in a document, but she also provides the context or sentiment in which these words occur in the text. For instance, Malena informs her if the words "water consumption" are found in a document, and she explains the context in which they appear. A positive context would be a sentence indicating that water consumption has decreased in a company's operations. Alternatively, Malena informs her if the phrase "women on the board" is found in a negative context, such as in a sentence that discusses how a company still has to meet its board diversity targets. This helps my colleague understand what a company is doing well, where they're having problems, and allows her to get to the important stuff quickly so she can proceed with making her investment recommendation.
I imagine what you folks are really interested in is how we used data and AI to build Malena. We were able to successfully scale what started as a proof of concept in 2019. This exercise helped us realize that institutions like ours can be particularly good laboratories for developing specialized versions of artificial intelligence applications like Malena.
Because institutions like ours have what these models love, and the reason we're all here today is data. Having been in business for nearly 70 years, we have a wealth of information at our disposal. The International Finance Corporation has generated a significant amount of data, totaling thousands of terabytes. Much of this data is unstructured, such as text in PDF and Word documents and maps, and, since we are a financial institution, we also have structured data in tables and databases. By using Databricks Lakehouse, we were able to accelerate the development of custom machine learning applications. These applications are trained on these unique datasets, covering thousands of environmental, social, and governance topics in well over a hundred emerging markets.
Our Vice President describes this process as "bottling lightning." This is due to the faster data processing coupled with GPU compute that scales on demand. We are successfully designing, training, and running large language models to analyze massive amounts of text using natural language processing. Blaise will talk in detail about how we fine-tuned the Malena algorithms to recognize the taxonomy of over 1,200 risk terms.
Over the past four years, we have used documents dating back to 2001 to create a comprehensive database. This database is instrumental in our efforts to understand and mitigate risks in our operations. We have used one hundred and fifty thousand examples of training data to teach Malena the meaning of terms like greenhouse gas emissions, board composition, or biodiversity management plan. As of today, Malena has read over 250,000 documents and created insights for 10,000 companies in 186 countries across 26 sectors.
We’ve already talked about one of Malena’s strengths, which is understanding context. This understanding has allowed Malena to identify more than 32 million risk terms from all the documents it has read and to classify these as positive, negative, or neutral. Why is this valuable to you? Classifying risks saves time in reading and organizing information, gives you a head start, and helps you understand what a company is doing well and what they may be struggling with.
Malena's other big strength is speed. It can read 19,000 sentences a minute, compared with human readers who read 10 to 20 sentences a minute. I checked, and that is 950 times faster. Documents like that 700-page annual report are analyzed in a matter of minutes. After reviewing documents, Malena structures the findings and generates dashboards on sustainability performance. These dashboards make the relevant information easy to find and display results through different lenses, like document type, sentiment, or risk type. What does this help you do? At the press of a button, our users can unlock curated information from many lengthy documents, PDFs, and tables running into thousands of pages. They can disregard the non-essential information and focus on what matters. This helps them make better-informed decisions, and it has been a game-changer in how we do business.
For example, we have been able to help our colleagues reduce the amount of time it takes to compile an insightful company profile from weeks to half a day. While this has increased productivity and reduced errors, it has also given our experts more time to scale up what humans, for now at least, still do better than machines, which is unstructured problem-solving.
Or thinking creatively, our approach has empowered our team to leverage their expertise more effectively. By automating the mundane tasks, we’ve enabled them to focus on the areas where human insight and creativity are irreplaceable. This balance of technology and human skill is the cornerstone of our innovative business model. Innovating to find answers to problems for which the rules do not exist, the International Finance Corporation, a development institution with the mandate to address climate change and poverty, responds to pandemics and conflicts. It is harnessing the power of big data and AI by making use of our historical datasets to train custom artificial intelligence applications for use in operations. Additionally, it aims to deliver the World Bank Group’s first AI as a service. While initially conceived as an in-house solution for our experts, Malena is in beta testing with other users.
One of the reasons, or one of the words, that I heard over and over today is about democratizing access. As a development finance institution, we are seeking to provide other users with the capacity to read and classify text and increase investor confidence.
We are hoping to release Malena for use externally in August this year, a full year ahead of schedule. We’re racing to get Malena out to… We aim to provide a solution to a key challenge in meeting the Sustainable Development Goals: the gap in sustainability data. This data gap has been identified as a blocker to making sustainable, long-term investments, especially in parts of the world where development needs are the greatest. Consequently, the gaps in the global goals are the most significant.
This impetus to share Malena is also motivated by research, both our own and that of others. It finds that news impact reports and other corporate disclosures, like bond prospectuses, were underused when assessing the investment performance of companies in emerging markets. This is because investors do not have the in-house capacity to review all this information. Therefore, by using Malena to read and structure this underused information, investors can now easily assess these companies that went uncovered.
But what is the connection between the Sustainable Development Goals and investors? The funding gap to meet the global goals is… The estimated annual funding gap for sustainable development is two to three trillion dollars. In contrast, there are almost 200 trillion dollars invested in capital markets. A small proportion of this, if redirected, can play a crucial role in filling this gap, and Malena can help make this happen. I'm overjoyed to tell you guys how we use Databricks Lakehouse to scale Malena. Oh, thanks, guys. Thank you.
Hi everyone, thank you. So, what's the infrastructure setup for Malena, and how did we use Microsoft Azure and Databricks Lakehouse to train and deploy a large language model in production? As a reminder, the majority of the data processed by Malena are unstructured text data. Typically, these are lengthy and very complex PDF and Word documents.
These documents come from three main sources. The first data source is IFC data, which typically includes IFC project disclosures, investment review summaries, and project supervision texts. The second data source comprises company public disclosures: annual reports, sustainability reports, and bond prospectuses. The third data source is alternate data, including daily news reports. How does the data ingestion process work? We use Azure Data Factory to organize our data ingestion pipelines. Every day, we have more than 20 data ingestion pipelines running in the backend to update data, pulling in the most recent news articles and annual reports. Then, the ingested data is stored in Azure Data Lake Storage Gen2.
Our storage approach is aligned with the Databricks Lakehouse storage principle of having three layers of storage. Bronze is where we land our raw data, which comes from internal and external systems. Silver is where the bronze data are cleaned, conformed, and harmonized.
Gold is where we typically store the already-prepared data for consumption. Storing those complex PDF and Word documents in the data lake is particularly valuable because it gives us the capability to manage and analyze large volumes of data efficiently. We have established a system between Databricks and the data lake that allows us to efficiently perform additional data engineering tasks. These tasks include document deduplication and named entity recognition, among others. However, what exactly are we doing with Databricks apart from data engineering?
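To make the bronze/silver/gold flow concrete, here is a minimal PySpark/Delta sketch of the idea; the paths, columns, and cleaning steps are illustrative assumptions, not IFC's actual pipelines:

```python
# Illustrative medallion flow: land raw extracted document text (bronze),
# clean and deduplicate it (silver), and explode it into ready-to-consume
# sentences for inference (gold). Paths and columns are assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: land the raw extracted text as-is.
raw = spark.read.json("/mnt/landing/extracted_documents/")
raw.write.format("delta").mode("append").save("/mnt/lake/bronze/documents")

# Silver: clean, conform, and deduplicate the bronze data.
bronze = spark.read.format("delta").load("/mnt/lake/bronze/documents")
silver = (
    bronze
    .withColumn("text", F.trim(F.col("text")))
    .filter(F.length("text") > 0)
    .dropDuplicates(["document_id"])
)
silver.write.format("delta").mode("overwrite").save("/mnt/lake/silver/documents")

# Gold: one row per sentence, ready for the sentiment model to consume.
gold = silver.withColumn("sentence", F.explode(F.split("text", r"(?<=[.!?])\s+")))
gold.write.format("delta").mode("overwrite").save("/mnt/lake/gold/sentences")
```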
The first answer to this question is that we are utilizing a language model. We started with Google's BERT back in 2020, and now we have transitioned to RoBERTa, an open-source English language model released by Meta. As I mentioned, we use supervised machine learning to fine-tune the RoBERTa model, which has 125 million parameters. The fine-tuned model is stored in Databricks' model registry, which includes all the YAML files, the model weights, and the fine-tuned model's performance metrics, such as accuracy, F1 score, and others.
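A hedged sketch of what registering such a fine-tuned model in the MLflow Model Registry on Databricks can look like; the artifact directory and registry name are illustrative, and the metric values are simply the figures quoted in the talk:

```python
# Log a fine-tuned model's artifacts and metrics, then register it.
# The directory, registry name, and metric values are illustrative.
import mlflow

with mlflow.start_run() as run:
    # Metrics computed on the held-out test set (figures quoted in the talk).
    mlflow.log_metrics({"accuracy": 0.92, "f1": 0.91})
    # Weights, tokenizer files, and config saved via model.save_pretrained(...).
    mlflow.log_artifacts("./malena-roberta-finetuned", artifact_path="model")

# Register this run's model under a named entry in the Model Registry.
mlflow.register_model(f"runs:/{run.info.run_id}/model", "malena_esg_sentiment")
```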
Next, regarding running inference, we have a daily Spark job that utilizes the fine-tuned model. Every night at midnight, a scheduled job runs and updates the inference database. In addition to batch processing, we complemented those batch inference jobs with real-time inference jobs via Azure Functions. This was in response to client requests for real-time inference from the model. Both real-time inference results via Azure Functions and batch inference results are displayed to the end-users, both internal and external, via Power BI interfaces.
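As a rough illustration of such a nightly batch-inference job, here is a hedged PySpark sketch using a pandas UDF and a Hugging Face text-classification pipeline; the model path, table locations, and column names are assumptions:

```python
# Nightly batch inference sketch: score each gold-layer sentence with the
# fine-tuned sentiment model. Paths, columns, and model location are assumed.
import pandas as pd
from pyspark.sql import SparkSession
from pyspark.sql.functions import pandas_udf
from pyspark.sql.types import StringType
from transformers import pipeline

spark = SparkSession.builder.getOrCreate()

@pandas_udf(StringType())
def predict_sentiment(sentences: pd.Series) -> pd.Series:
    # Loaded per batch for simplicity; cache per executor in a real job.
    clf = pipeline("text-classification", model="/dbfs/models/malena-roberta")
    preds = clf(sentences.tolist(), truncation=True)
    return pd.Series([p["label"] for p in preds])

sentences = spark.read.format("delta").load("/mnt/lake/gold/sentences")
scored = sentences.withColumn("esg_sentiment", predict_sentiment("sentence"))
scored.write.format("delta").mode("overwrite").save("/mnt/lake/gold/sentence_sentiment")
```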
How does this setup allow us to excel? We have been able to train, fine-tune, and deploy AI models for various tasks. These include language translation, named entity recognition, and question-answering using GPT models. Currently, our most mature use case is ESG sentiment analysis. I will delve deeper and explain how we accomplished this, specifically how we used the data lake and Databricks to train the sentiment analysis model.
Here is the Malena architecture design. One of the first choices we made back in 2020 was to adopt the Transformer architecture because it typically performs better than other architectures like Long Short-Term Memory networks. The blue box here illustrates the base model layers, the RoBERTa layers, and on top of these we add an ESG attention augmentation layer. This allows the model to focus more on ESG-specific risk terms, such as climate, corporate governance, social, and environmental terms. Most importantly, this is helpful when there is more than one risk term present in a sentence, as it enables the model to predict multiple sentiments within the same sentence. This feature is unique to the Malena model.
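The exact ESG attention augmentation layer is not published; as a purely hypothetical sketch of the idea, one could pool RoBERTa token embeddings with an attention weighting that emphasizes the tokens of a marked risk term, so that each risk term in a sentence gets its own sentiment prediction:

```python
# Hypothetical sketch of an attention-augmented classifier on top of RoBERTa.
# All layer sizes, names, and the risk-term boosting scheme are assumptions,
# not the published MALENA design.
import torch
import torch.nn as nn
from transformers import RobertaModel

class RiskTermAttentionClassifier(nn.Module):
    def __init__(self, num_labels: int = 3):
        super().__init__()
        self.roberta = RobertaModel.from_pretrained("roberta-base")
        hidden = self.roberta.config.hidden_size
        self.attn = nn.Linear(hidden, 1)           # token-level attention scores
        self.classifier = nn.Linear(hidden, num_labels)

    def forward(self, input_ids, attention_mask, risk_term_mask):
        # risk_term_mask: 1 for tokens belonging to the marked risk term, else 0.
        hidden_states = self.roberta(input_ids, attention_mask=attention_mask).last_hidden_state
        scores = self.attn(hidden_states).squeeze(-1)
        # Boost attention on risk-term tokens so each risk term can receive its
        # own sentiment even when several terms share a sentence.
        scores = scores + risk_term_mask * 2.0
        scores = scores.masked_fill(attention_mask == 0, float("-inf"))
        weights = torch.softmax(scores, dim=-1).unsqueeze(-1)
        pooled = (weights * hidden_states).sum(dim=1)
        return self.classifier(pooled)              # logits: negative / neutral / positive
```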
So, how does this work in practice? A typical input for Malena is a sentence containing one or more risk terms, where "climate" might be the first risk term. The second risk term could be "CO2 emissions."
Then, the next step is fine-tuning. We use the base model, RoBERTa, and on top of this, we add the labeled data. Remember, it's supervised machine learning, so we train the model with data that includes the correct answers. This way, the model learns to recognize and predict the sentiment associated with various risk terms in different contexts. Because RoBERTa is a foundation model trained to predict words in a sentence, we complemented it with a classification head to conduct sentiment analysis. The classifier identifies three types of sentiment: negative, neutral, and positive. Once we have the fine-tuned model trained, saved, and registered, we can use it for prediction. This is a typical output of the model.
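A minimal sketch of this fine-tuning step with the Hugging Face Trainer: RoBERTa plus a three-class head trained on labeled (sentence, risk term, sentiment) rows. The file names, input formatting, and hyperparameters are illustrative assumptions:

```python
# Fine-tune RoBERTa with a 3-class head (negative / neutral / positive).
# Data files, column names, and hyperparameters are illustrative.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=3)

# Each row: sentence, risk_term, label (0=negative, 1=neutral, 2=positive).
dataset = load_dataset("csv", data_files={"train": "labels_train.csv",
                                          "test": "labels_test.csv"})

def tokenize(batch):
    # Pair the risk term with the sentence so the model knows which term
    # the sentiment judgment applies to.
    return tokenizer(batch["risk_term"], batch["sentence"],
                     truncation=True, padding="max_length", max_length=128)

dataset = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="malena-roberta-finetuned",
                           num_train_epochs=3, per_device_train_batch_size=16),
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
)
trainer.train()
trainer.save_model("malena-roberta-finetuned")
```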
Remember, we use supervised machine learning, and supervised machine learning means that we need labeled data. High-quality training data is crucial; it’s very important for any supervised machine learning. We observe that the model’s performance increases with better and cleaner labeled data.
So, we asked our specialists to help us create more than 150,000 training data points, covering all aspects of ESG, including environmental, social, climate, and corporate governance terms. We apply very strong quality control as an integral part of the process of creating labeled data. The first quality control measure we apply is to define very robust and comprehensive labeling rules to ensure that all labelers are aligned and following the same rules. Next, we only use a data point to train the model where we have consistency between at least two labelers.
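The consensus rule described here is easy to express in code; a small sketch (column names are assumptions) that keeps a labeled example only when at least two labelers agree on the sentiment:

```python
# Keep only labels where at least two labelers agree. Column names are assumed.
import pandas as pd

def consensus_labels(annotations: pd.DataFrame) -> pd.DataFrame:
    """annotations columns: example_id, labeler_id, sentiment."""
    rows = []
    for example_id, group in annotations.groupby("example_id"):
        counts = group["sentiment"].value_counts()
        if counts.iloc[0] >= 2:                     # at least two labelers agree
            rows.append({"example_id": example_id, "sentiment": counts.index[0]})
    return pd.DataFrame(rows)
```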
Because we know there are some costs associated, we have embedded an active learning feature in the application. This allows active users to provide feedback on the model’s predictions. This feedback is then captured on the backend and reviewed one more time for a sanity check before using it to refine the model and the training data.
Our training data coverage spans ESG news reports, our internal project documentation, company public disclosures, and, as I mentioned before, around 15,000 user feedback documents on the model's predictions. So, how did we evaluate the model? We evaluated the model by splitting the training data into two subsets. The first subset comprises approximately 85 percent of the data for training, and the remaining 15 percent is used to test the model on unseen data. Through this process, the current model achieved an accuracy of 92 percent and an F1 score of 91 percent, which is significantly better than the out-of-the-box sentiment analysis models that we have tested so far.
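A small sketch of this evaluation, following the 85/15 split noted earlier; the helper names and the weighted-F1 choice are assumptions:

```python
# Hold out 15% of the labeled sentences and score predictions with accuracy
# and F1, mirroring the evaluation described above. Names are illustrative.
from typing import Callable, Sequence
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

def evaluate(sentences: Sequence[str], labels: Sequence[str],
             predict: Callable[[Sequence[str]], Sequence[str]]) -> dict:
    train_x, test_x, train_y, test_y = train_test_split(
        sentences, labels, test_size=0.15, stratify=labels, random_state=42)
    # The model would be fine-tuned on train_x / train_y before this call.
    preds = predict(test_x)
    return {"accuracy": accuracy_score(test_y, preds),
            "f1_weighted": f1_score(test_y, preds, average="weighted")}
```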
One of the benefits of this kind of model, and for us the main benefit, is their scalability: their ability to analyze a large amount of data in a consistent way. Atya has already talked about some of the benefits we have seen, such as reducing the amount of time our specialists take to review documents.
Another example is the reduction of 45 percent in overdue supervisory tasks for review. Additionally, the model can assign a sentiment score, an ESG sentiment score, to a specific document, and we then use this score to prioritize document review. One last example is our ability, through Databricks Lakehouse, to review hundreds of legal agreements consisting of seven thousand pages within just 12 minutes. But let's address the elephant in the room.
There have been a lot of media news raising some concerns about using AI. That’s why we think that trust is critical. Users and stakeholders must trust that their data privacy is well respected, that the data is used transparently, and also that their data is protected and maintained securely.
That's why we are developing a data and model governance framework covering AI ethics, fairness, and explainability on how the model makes predictions. In this way, we think that it can increase trust and confidence in the model's predictions and ensure that technologies like these are adopted in an inclusive manner. To implement an equitable approach, we have developed a transparency dashboard comprising six pillars. Inspired by the IBM fundamental properties for AI fairness, our dashboard covers model predictive performance, fairness, and explainability. For us, these three aspects are the most important. The predictive performance of our model, which we call Malena, measures how well it can appropriately address the use case in our specific situation, such as conducting ESG sentiment analysis. Since we're dealing with classification problems, we typically use model accuracy and the F1 score to assess this pillar.
The next pillar is fairness, which is the most difficult to assess but a crucial addition. Fairness is the concept of justice. The fairness score of Malena evaluates the model's capability to predict ESG sentiment without unfairly penalizing any specific group of the population. This could pertain to a specific country, a particular collision, or a specific sector.
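The fairness score itself is not published; one common way to approximate the idea described here is to compare each group's accuracy (grouped by country or sector, for example) against the overall accuracy and report the largest gap. A hedged sketch, with assumed column names:

```python
# Compare per-group accuracy with overall accuracy and report the worst gap.
# This is one common fairness proxy, not the actual MALENA fairness score.
import pandas as pd
from sklearn.metrics import accuracy_score

def fairness_gap(results: pd.DataFrame, group_col: str) -> float:
    """results columns: true_sentiment, predicted_sentiment, plus a group column."""
    overall = accuracy_score(results["true_sentiment"], results["predicted_sentiment"])
    gaps = []
    for _, group in results.groupby(group_col):
        group_acc = accuracy_score(group["true_sentiment"], group["predicted_sentiment"])
        gaps.append(abs(overall - group_acc))
    return max(gaps)   # smaller is fairer: no group deviates much from overall
```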
Lastly, the third pillar is explainability, which is another great addition, especially for end users. The explainability score assesses how well Malena is able to provide some reasoning behind the sentiment prediction. We implemented this at the model level, which is typically for those who have access to the model to manage bias, for example. But it's also at the sentence level, and this is particularly useful for the end users.
Looking at our roadmap for the future, over 2023, we plan to expand the Malena capability to analyze different languages. Next week, on Friday, we’ll be deploying a new feature that will enable the analysis of Spanish, Portuguese, and French documents.
The next thing we want to do is implement Databricks model serving, especially because we would like to improve our real-time serving efforts. This will be particularly useful because, in August, we will be making it available for public use. The last thing we want to do is implement Unity Catalog, which is a significant step for us. We've heard a lot today about Unity Catalog. For us, implementing it will be a good setup to leverage another Databricks product, known as Delta Sharing, to share data across IFC and other institutions. So, why is all this important to you? We believe that establishing a solid and powerful data infrastructure, like the Databricks Lakehouse combined with LLMs, is key for addressing industry-specific challenges.
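For the Delta Sharing plan mentioned above, consuming a shared table with the open-source delta-sharing Python client looks roughly like this; the profile file and the share/schema/table names are placeholders:

```python
# Read a table published through Delta Sharing. The provider distributes a
# profile file with the sharing endpoint and a bearer token; names are placeholders.
import delta_sharing

table_url = "config.share#esg_share.malena.sentence_sentiment"
df = delta_sharing.load_as_pandas(table_url)
print(df.head())
```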
Secondly, we think domain-specific AI, combined with data, is crucial for addressing industry-specific challenges, and Malena is one example of this.
Finally, IFC's Malena is a great example of how a combination of data and AI enables stronger investment confidence in ESG-positive impact investing. Remember that $200 trillion and its potential positive impact on emerging markets. We invite you to join us in this endeavor, amplifying this positive impact through Malena. Thank you.
ChatGPT - Summary
I asked ChatGPT to summarize the transcript, and here is what it said:
Run-1
The transcript is from a presentation about an AI tool named Malena, developed by the International Finance Corporation (IFC), a member of the World Bank Group. Malena, a Machine Learning Environment for Social and Governance Analysis, assists investment analysts by quickly reviewing extensive documents for sustainability and risk assessment. The IFC, focused on private sector investments in emerging markets, invested nearly $37 billion in the past year, emphasizing sustainable impact.
Key aspects of Malena include its ability to read and analyze vast amounts of text rapidly, identifying specific environmental, social, and governance (ESG) terms and their context in documents. This capability enables analysts to efficiently sift through hundreds of pages of technical documentation, highlighting potential red flags or areas of interest for investment decisions.
The development of Malena started as a proof of concept in 2019, leveraging IFC’s extensive data accumulated over 70 years, including unstructured text in PDF documents and structured data in databases. Using Databricks Lakehouse for data processing, the IFC was able to create custom machine learning applications for their unique dataset. This process involved training large language models using natural language processing techniques.
One of the primary advantages of Malena is its speed; it can read 19,000 sentences per minute, significantly outpacing human analysts. This efficiency translates into tangible benefits, such as reducing the time to compile a company profile from weeks to half a day, thereby increasing productivity and reducing errors. Malena also supports unstructured problem solving and creative thinking by handling routine data analysis tasks.
The AI model, initially based on Google’s BERT, later transitioned to RoBERTa from Meta. It involves supervised machine learning and a fine-tuning process using a labeled dataset covering various ESG aspects. The model’s performance, evaluated using a split of training and test data, achieved a 92% accuracy and a 91% F1 score.
In terms of model governance, the IFC emphasizes trust, ensuring data privacy, transparency, and security. A transparency dashboard comprises predictive performance, fairness, and explainability metrics. The roadmap for Malena includes expanding its capabilities to analyze documents in multiple languages and improving real-time model serving.
The presentation underscores the significance of combining AI and data to address industry-specific challenges. Malena exemplifies this by enhancing investment confidence in ESG-positive impact investing. With a global funding gap of 2-3 trillion annually for sustainable development goals, the potential positive impact of Malena in mobilizing a portion of the $200 trillion in capital markets towards these goals is substantial. The IFC plans to release Malena for external use, democratizing access to its capabilities and contributing to sustainable investment practices worldwide.
Run-2
In the first go, ChatGPT missed the content from the second speaker. When I specifically asked for a summary of the second speaker's talk, it gave the following:
The second speaker in the transcript focused on the technical aspects and infrastructure setup of Malena, an AI tool developed by the International Finance Corporation (IFC) for analyzing environmental, social, and governance (ESG) data in investment documents. Key points of the talk include:
Data Sources and Ingestion: Malena processes primarily unstructured text data, like PDF and Word documents, sourced from IFC data, company public disclosures, and alternate data like news reports. The data ingestion process uses Azure Data Factory and is stored in Azure’s data platform, adhering to Databricks Lakehouse storage principles with three layers: bronze (raw data), silver (cleaned and harmonized data), and gold (consumption-ready data).
Model Training and Deployment: Initially, Google’s BERT was used, but later they transitioned to RoBERTa, an open-source language model by Meta. The model is fine-tuned for ESG sentiment analysis, using labeled data to predict sentiments as negative, neutral, or positive. Model performance metrics like accuracy and F1 scores are tracked, and the model is deployed for both batch and real-time inference, visible to users through Power BI interfaces.
Model Evaluation and Labeling Process: For model training, over 150,000 examples of training data were created, covering a wide range of ESG aspects. The training process involved robust labeling rules, consistency checks, and active learning features for user feedback. The current accuracy of Malena is 92%, with a 91% F1 score.
Governance and Trust: To address concerns about AI usage, a framework covering AI ethics, fairness, and explainability is developed. A transparency dashboard includes predictive performance, fairness, and explainability scores, aiming to increase trust and confidence in Malena’s predictions.
Future Roadmap: Plans include expanding Malena’s language capabilities (adding Spanish, Portuguese, and French), implementing Databricks model serving for better real-time performance, and leveraging Unity Catalog for data sharing. The tool is set to become publicly available, aiding in addressing the global funding gap for sustainable development goals.
The speaker emphasized the combination of AI and data in addressing specific industry challenges, highlighting Malena’s role in enhancing investment confidence and its potential impact in mobilizing capital towards sustainable investments.