AI Safety: Alignment – after 10 weeks

I was preparing my BlueDot AI Safety: Governance application when I remembered how I applied for the Intro to Transformative AI course before – Lilian-Ang (a product manager at BlueDot) said my application was very reflective, and I got in, Alhamdulillah!

It’s not perfect, but that’s one thing my very good friend R and I realized: we should foster more reflection & vulnerability too. Even if we don’t get in (to be honest, I got into AI Safety only on my second try), we learn a lot from the writing we do and the preparation we put in beforehand.

With that intention, here is my application. Enjoy!

How do you expect this course will help you contribute to AI safety?

You should specify:

  • Concrete career steps you’ll take after the course
  • How the course will help you take these steps
  • How this will contribute to AI safety, and ideally AI governance

This course would accelerate some of the work I have in the pipeline. Over the next 6 months to 1.5 years, my focus is on sustaining my responsible AI startup while pursuing a part-time MA in AI development focused on super tiny language models, alongside AI Safety Camp research (if accepted) that carries forward my work on how multiple worldviews influence AI development and safety understanding.

To be honest, it’s super hard to find the right supervisor for my intention there: the intersection between AI & AI Safety. Because of this challenge, I’m pivoting a bit to consider doing my MA (locally) in governance.

Participating in the Transformative AI course would be instrumental in accelerating my academic and professional goals. By diving deeper into the technical and philosophical underpinnings of advanced AI systems, I would gain invaluable insights that I can apply directly to the development of my startup’s super tiny language model and to what I offer our customers and community. The opportunity to learn from leading experts in the field and collaborate with a diverse cohort of fellow researchers and practitioners would further sharpen my understanding of the multifaceted challenges in AI safety and governance.

I didn’t regret joining (read: adding another headache while juggling that week with 2 other courses: AI Safety and Intro to Effective Altruism) because even though there’s overlap between the TAI course and the AI Safety course, the live discussions & cohort members are very different – I got to consider more non-tech & startup POVs in TAI, something I don’t really get in my AI Safety cohort.

Within 2-3 years, as the community building solidifies, I hope to see more progress on the alternative hardware we’re experimenting with at my startup (powering the super tiny language model), as well as to complete my MA degree. I then plan to enroll in a PhD program studying the history of, and building frameworks for, AI safety and governance from an Islamic perspective, building on the community work & technical startup experience we will have built in the years prior.

I didn’t know that just 2 months after I wrote this, I would get to meet my co-founder, and we’re strategizing more as we speak.

I also didn’t plan to meet Dr Hazieqa Hamzani, through whom my hunch to study philosophy of mind (the fundamental study of AI and, if I may say, its most sustainable part too, ie beyond the hype, an evergreen question) was solidified. We got to talk about Heidegger, the mind-body problem, and how the US (and we, included) haven’t tapped into the great evolution of philosophy of mind in tech. Awesome, right?

In years 3-6, my goals are to grow the business on the product side while working deeply on my PhD research interests, perhaps with 1-2 fellowships as a PhD student. Insights from this will guide how I internationalize the company in an ethically and culturally aligned manner, while staying technically ahead. I would love to collaborate with international universities leading breakthrough studies, eg the Max Planck Institute (though this may change, as AI development can move very fast).

I’m also considering, more boldly (as a doa), studying at the Oxford Internet Institute. I have emailed with Dr Ana Valdivia too (on compute governance), and I’m very keen to go down this path now.

Longer-term, I want to establish myself as a thought leader at the intersection of technology, policy and Islamic scholarship. Publications and speaking engagements are priorities, as is advising stakeholders on culturally informed AI strategies, and this Transformative AI course would be a great resource to accelerate some of these moving parts.

How have you engaged with the AI safety field so far?*

This could include things like:

  • Events you’ve organised or attended
  • Projects you’ve worked on
  • Blog posts you’ve written
  • Resources you’ve read or watched

I have started to be more proactive.

  • I participated in AI Safety Camp last year (researching uncontrollable risks of AGI), and I have been exploring machine learning applications (eg LLM engineering and fine-tuning, via Udemy and YouTube, to deepen my technical POV), giving me a deeper understanding of the field. I aim to increase my familiarity through further collaborative projects and focused research on ensuring advanced AI is developed and applied safely. I am keen to venture more into AI governance, as it is related, and I see the need to engage with this topic from a Malaysia/Southeast Asia standpoint too.
  • I am currently in Cohort 11 of AI Safety Fundamentals (by BlueDot), alongside the 8-week Effective Altruism virtual course, to accelerate the expansion of my understanding. I can see how my understanding of AI safety, existential risks, and effective altruism has evolved and expanded in an accelerated way thanks to these courses, which has shaped how I restrategize my current & future actions. I am glad I got accepted on my 2nd application, months after improving my portfolio following the 1st rejection.
  • In terms of resources, much of the material in those 3 programs above enriched me, and led me to follow more closely in the footsteps of my favourite AI philosopher, who works directly in AI ethics and, to some extent, AI governance: Prof Luciano Floridi. I have been following his work for the past 3 years, and his genius in bridging classical works with today’s fast-paced AI is mesmerizing.
  • I organized a community AI book club around Ibn Tufayl’s Hayy Ibn Yaqzan (which inspired Tarzan), on understanding the nature of man in building AI, and we recently did Aristotle for Everybody too, to bridge junior engineers into the humanities side of AI development from a unique POV. We believe that AI Safety and AI Alignment could and should be enriched by different traditions – policy-wise, engineering-wise, and culturally. I am currently awaiting the result of the Kairos AI Safety fieldbuilding programme too; though not waiting in vain, as Verdas AI (who work on AI ethics in Europe and pro bono in Malaysia) and I are in talks to start fieldbuilding in 2025 with the government of Malaysia, building on our early experience presenting and sharing about AI Safety & AI ethics recently to bridge the gaps further.

What skills have you developed that could be used to advance AI safety?*

This question allows us to place you in a cohort at your level of policymaking and technical expertise. You could tell us about:

  • Programming projects or ML courses you’ve done (at university or self-taught)
  • Government, think tank or similar work related to policymaking, especially on technology
  • Politics, international relations or law courses that gave relevant skills

In founding my responsible AI startup, I have gained real-world experience consulting with government agencies and civic groups on deploying new technologies responsibly. This has strengthened my capability for multi-stakeholder engagement around complex tech issues. Alongside this, thanks to frugality, technical upskilling has also served us well: I learned LLM engineering, fine-tuning models, and more.

My undergraduate coursework in computer science, statistics and biology, focusing on machine learning and programming projects, provided technical grounding that helps me contribute meaningfully and quickly to discussions, blending computational and social-philosophical perspectives. I’ve further supplemented this with casually re-reading maths books and an ML certification from CodeAcademy.

However, my most notable skills would be communication and leadership; learning about AI safety nudged me to do more in society. I have presented on Mabadi Asharah and the opportunity for moral dynamism in the AI scientific tradition to the advisory team of the Prime Minister of Malaysia, Anwar Ibrahim. I have also been involved for several years in socio-political activism; although the focus wasn’t on technology, I developed skills in advocacy, writing, and event management in the process. I aim to improve these skills further for AI safety, as I am now venturing to work with more stakeholders (government agencies and universities) in AI. I am also taking a short course on policy with Aspen Academy, to shore up my lack of skills and experience in the area, and re-engaging with my rapporteuring experience with a state government from a few years back.

Tell us about one of your achievements you regard as most impressive, or that you’re most proud of.*

This should ideally demonstrate agency, ambition and an ability to get things done. If you’ve done something uniquely impressive in your work, or it is already highly relevant to AI governance we recommend you explain this.

If not, we recommend you explain your contribution to something outside your job. Strong candidates often describe projects that require unique skills and dedication, or projects that benefit the public or a large community.

My proudest recent moment would be presenting to a close advisor of the PM of Malaysia, Anwar Ibrahim, and sharing with him the draft of my early work bridging the Islamic tradition with the AI scientific tradition. This opportunity came through the Philosophical Society I was in; I was adamant this year to bridge philosophy and industry more, and AI could be the golden door.

Other than that, I was also proud to win the 1337 Female Pre-accelerator with an impact-driven AI-based app, the MVP of my responsible AI startup, which aims to ensure technologies developed in Malaysia uplift society through ethical innovation and cultural sensitivity – in this case, helping B40 entrepreneurs shorten the time it takes to digitalize and handle their manual business processes.

The pre-accelerator was the result of my 4 months of volunteering under a Parliamentary Fellowship in digitalization in 2021, where I realized many solutions were not properly considering local needs and values. I decided to found a startup committed to co-creating technologies with impacted communities through participatory processes. I could see the same issue in AI safety, and hence in the challenges of transformative AI.

In the early R&D phase, my most important contribution was conducting extensive interviews and focus groups and synthesizing the perspectives into formal design frameworks. This helped uncover diverse views on privacy, equity and human-AI collaboration priorities that guided our technical approach. Leading this small initiative taught me a lot about balancing ‘activism’ and ‘business’.

Moving forward, I hope my dedication to embedding inclusion and responsibility from the inception of philosophical understanding can open more doors and thus enrich AI Safety and AI Alignment. To be honest and vulnerable, doing this while also balancing being a good mom, wife, sister, friend, and mentor is something I am personally keen to see more of too, as AI Safety-Alignment is personal to me: how we see ourselves as humans in the mirrors of these machines.
