- The Economic Impact of Generative AI report from Access Partnership, ELSAM, and Microsoft highlights what the economic opportunities of generative AI mean for Indonesia’s industry and workforce readiness. The English version of the report can be accessed here.
- Microsoft has published a five-point blueprint for governments to consider for their AI policies, laws and regulations, which can be accessed here.
- Microsoft has announced the Copilot Copyright Commitment, extending intellectual property indemnification support to commercial customers for their use of Microsoft Copilot and Bing Chat Enterprise and the content they output, provided the customer uses the guardrails and content filters built into the products.
- Microsoft has released Azure AI Content Safety, a new service that enables customers to detect and filter harmful user-generated and AI-generated content within their apps and services.
Jakarta, 30 October 2023 – The report “The Economic Impact of Generative AI: The Future of Work in Indonesia,” published by Access Partnership in collaboration with ELSAM and supported by Microsoft, reveals that using generative AI to complement work activities could unlock USD 243.5 billion in production capacity across the Indonesian economy, equivalent to 18% of Indonesia’s GDP in 2022.
Dharma Simorangkir, President Director of Microsoft Indonesia, said: “A new generation of AI, generative AI, helps us interact with data in new ways: summarizing text, detecting anomalies, and recognizing images. Its natural language interface allows us to interact with the technology in everyday language, while its power as an inference engine allows it to identify patterns and derive insights faster. Combining these two capabilities gives every person and organization their own copilot, inspiring creativity, accelerating discovery, and increasing efficiency. All of this will have a positive impact on the economy if leveraged responsibly.”
The positive impact of generative AI is so great that organizations of all sizes and sectors, and even individuals in Indonesia, are beginning to incorporate the technology into their work and daily lives, for example, to improve personalization of customer service, increase education on new types of technology, and discover new ideas.
“These examples show that AI is meant to help people focus on the critical elements of their jobs, not replace them. After all, AI can only work with the data provided by humans, and it is developed to augment human capabilities,” Dharma continued.
New opportunities are still on the horizon. To realize them, the report details at least three aspects that require our attention, all grounded in responsibility: (1) improving access and utilization, (2) managing risks, and (3) fostering innovation.
Improving access and utilization
Improving access to and use of AI requires the right infrastructure and a skilled workforce. Generative AI’s natural language and inference engine capabilities also allow it to be democratized, meaning individuals will no longer have difficulty using the technology. Instead, they will need to learn new skills, such as prompting, analytical evaluation, and problem solving. At the same time, regulations governing the responsible development and use of AI will play a key role in maximizing the technology’s benefits and positive impact.
“One of our fundamental principles is that in a democratic society, no one is above the law. That is why we believe it is appropriate for regulators and policymakers to step up their oversight and consider new laws and regulations. We will continue to participate actively by sharing our experience and insights on responsible AI practices. We have also published Governing AI: A Blueprint for the Future, in which we try to answer the question of how we should govern AI,” said Ajar Edi, Director of Government Affairs at Microsoft Indonesia and Brunei Darussalam.
Managing risks
Unlocking opportunities and mitigating risks is not just a matter of expanding access or designing comprehensive regulations; it also requires concerted efforts to shape responsible AI practices in both development and use, which can become part of a company’s strategy and an individual’s principles for using AI.
“When Microsoft adopted its six AI ethics principles in 2018, we recognized that one principle – accountability – underpins all the others: fairness, reliability and safety, privacy and security, inclusiveness, and transparency. This is a fundamental need to ensure that machines remain subject to effective oversight by people, and that the people who design and operate them remain accountable to everyone else. In short, we must always ensure that AI remains under human control. This should be a top priority for both technology companies and governments,” Ajar continued.
To help build a holistic responsible AI ecosystem, Microsoft has publicly released the Microsoft Responsible AI Standard version 2 and the Microsoft Responsible AI Impact Assessment Report, which are the culmination of years of experience, learnings, and feedback we have received.
Fostering innovation
The final aspect is finding the right balance between protecting and promoting innovation. As AI policies and regulatory frameworks continue to develop, questions and concerns arise about the use of generative AI technologies in realizing new opportunities. Therefore, fostering an innovative environment requires close collaboration between governments and the private sector.
To fuel this innovation, Microsoft announced three enterprise AI customer commitments, the first of which is the Copilot Copyright Commitment. The Copilot Copyright Commitment provides enhanced intellectual property indemnification support for commercial Copilot services, so that if a third party sues a commercial customer for copyright infringement due to their use of Microsoft Copilot or its output, Microsoft will defend the customer and pay any damages or settlement costs resulting from the lawsuit, as long as the customer uses the guardrails and content filters built into Microsoft products.
For example, these guardrails and content filters are available in Azure AI Content Safety, which has been generally available since October 17, 2023. This new service helps detect and filter harmful user-generated and AI-generated content in customers’ apps and services. It includes text and image detection capabilities that identify offensive, risky, or undesirable content, including profanity, adult content, gore, violence, and hate speech.
###