What inefficiencies can we eliminate through bug fixes using A.I. and non-native apps?
… I hope this works out for Ubuntu and there is enough leverage to free up space and create a cracking system.
Thanks and discuss…
I’ve encountered so many problems with AI that it’s not even funny. I think this would create bugs, not solve them. There’s potential, but we haven’t reached it yet.
There is potential, but after I corrected one AI session on a convoluted reply that would have sent me down a rabbit hole, I think we need a “prompt generator” that talks Ubuntu lingo. The beast actually agreed. So my thinking is to have a front-end layer, using say Albert, to generate (rather than type) the usual prompt: a front-end interpreter linked to context such as manuals, URLs, bug reports and the like. The AI element should be context-driven rather than expected to answer casual prompts.
I haven’t had an issue with prompting per se, but with getting the logic correct. I was asking for help with a fairly convoluted awk predicament the other day. While somewhat complicated to my human mind, it’s really just simple text processing, so it should be fairly trivial to solve. The LLM ultimately led me in the right direction but couldn’t give me the correct answer: it was close, yet still wrong. Even when told it was wrong, and how it was wrong, it didn’t change the result.
I hear this often from people using ChatGPT’s free tier. Some of the other organisations (such as Anthropic) have better models. Also, the paid ones tend to be even better.
While I agree that there are problems with LLMs, as a counterpoint, here are some things I have used LLMs for recently, along with a real-world example:
Let’s do that last one, to show you how it can be successful:
Goal: Create a “Last week in” report about activity that happened last week
Steps:
1. Use gh to pull down all the issues and pull requests from Sunday, 16th March to Saturday, 22nd March.
2. Feed the resulting JSON files to the LLM with the following prompt:

You are tasked with creating a weekly report for La Suite numérique open source tools, primarily Docs and Meet. This report should cover the issues and development activity of the previous week. You will be provided with four JSON files containing data from GitHub. The bot activity has been stripped out as it's generally not of interest.
Here is the JSON data for issues that were originally opened by non-staff community members and closed last week:
all-community-nonbot-issues-last-week.json
Here is the JSON data for issues that were originally opened by La Suite numérique project staff and closed last week:
all-staff-nonbot-issues-last-week.json
Here is the JSON data for pull requests originally opened by non-staff community members and closed last week:
all-community-nonbot-prs-last-week.json
Here is the JSON data for pull requests originally opened by La Suite numérique project staff members and closed last week:
all-staff-nonbot-prs-last-week.json
Your task is to analyze this data and create a report in markdown format. The report should include:
1. An executive summary
2. A table of data with understandable numbers
3. More detailed information about interesting, important, or critical achievements from the previous week
Follow these steps to create the report:
1. Parse and analyze the JSON data:
- Extract relevant information from both JSON files
- Identify key metrics such as number of closed issues, types of issues, and important development activities
2. Write the executive summary:
- Provide a brief overview of the week's activities
- Highlight any significant achievements or trends
- Keep it concise, around 2-3 sentences
- Primarily focus on the community created issues and pull requests
- Also mention the La Suite numérique staff team created issues and pull requests
3. Create a data table:
- Include metrics such as:
- Total number of closed issues
- Number of bugs fixed
- Number of features implemented
- Number of documentation updates
- Present the data in a clear, easy-to-read markdown table format
4. Detail important achievements:
- Identify 3-5 of the most interesting, important, or critical items from the week
- For each item, provide:
- A brief description of the issue or development activity
- Its impact on the project
- Any notable challenges or solutions
5. Format the final report:
- Use markdown formatting for headers, lists, and tables
- Ensure the report is well-structured and easy to read
Your output should be the complete markdown report, structured as follows:
<report>
# La Suite numérique Open Source Weekly Report
This report covers the community activity in La Suite numérique Open Source Projects from [Start date] to [End date].
## Executive Summary
[Insert executive summary here]
## Weekly Metrics
[Insert markdown table here]
## Key Achievements
[List 3-5 important items with descriptions]
</report>
Remember to use appropriate markdown syntax for formatting, including headers, lists, and tables. Ensure that the report is informative, concise, and provides valuable insights into the week's activities for La Suite numérique open source tools. Consider a disclaimer near the bottom which mentions this is based on issues and pull requests closed in the period specified. Even more work goes on in issues and pull requests that aren't closed yet. Call to action to get involved, link to github.com/suitenumerique
gh output (all staff and community issues and pull requests, with bots excluded)

Create report for "Last Week in Suite Numerique".
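The gh collection step above can be sketched as a small helper that builds the export commands. The helper itself is hypothetical, but the flags are standard `gh issue list` / `gh pr list` options; the exact `--json` field list is illustrative.

```python
# Sketch of the collection step: build the `gh` command lines that export last
# week's closed issues and pull requests as JSON. The helper is hypothetical;
# the flags are standard `gh issue list` / `gh pr list` options.

def build_gh_command(repo: str, kind: str, start: str, end: str) -> list[str]:
    """Return argv for exporting closed issues or PRs in a date range as JSON."""
    if kind not in ("issue", "pr"):
        raise ValueError("kind must be 'issue' or 'pr'")
    return [
        "gh", kind, "list",
        "--repo", repo,
        "--state", "closed",
        "--search", f"closed:{start}..{end}",
        "--json", "number,title,author,labels,closedAt",
        "--limit", "200",
    ]

# Running this (with gh installed and authenticated) would produce one of the
# four JSON files, e.g. via subprocess.run(cmd, capture_output=True).
cmd = build_gh_command("suitenumerique/docs", "issue", "2025-03-16", "2025-03-22")
print(" ".join(cmd))
```

Repeating this for each repo, for issues and PRs, and for staff versus community authors yields the four input files the prompt expects.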
Here’s the resulting report it generated. This would require human review, of course, but I think it’s a pretty good start.
# La Suite numérique Open Source Weekly Report

This report covers the community activity in La Suite numérique Open Source Projects from March 15, 2025 to March 22, 2025.
## Executive Summary

The past week saw significant community engagement with 12 community issues closed and 31 pull requests merged across La Suite numérique projects. Key developments include Dutch language support in Docs, improvements to OIDC implementation, and the release of Docs v2.5.0 and v2.6.0. The community demonstrated strong interest in the project following recent visibility on Hacker News, with many feature requests and documentation improvements.
## Weekly Metrics

| Metric | Community | Staff | Total |
|---|---|---|---|
| Issues Closed | 12 | 2 | 14 |
| Pull Requests Merged | 31 | 8 | 39 |
| Bug Fixes | 5 | 3 | 8 |
| Features/Enhancements | 7 | 0 | 7 |
| Documentation Updates | 3 | 1 | 4 |
| Releases | 4 | 0 | 4 |
| Repository | Issues Closed | PRs Merged | Total Activity |
|---|---|---|---|
| docs | 11 | 18 | 29 |
| meet | 1 | 7 | 8 |
| people | 0 | 10 | 10 |
| design-system | 0 | 4 | 4 |
## Key Achievements

Several community members discussed OIDC support for authentication in Docs, highlighting the importance of SSO for enterprise deployments. The team confirmed that OIDC is supported and shared information about a separate product under development called “People” for user management. This demonstrates the project’s attention to enterprise needs.
Reference: Issue #735
The team activated Dutch language support in Docs, expanding the international accessibility of the platform. This follows community contributions of Dutch translations and reflects the project’s commitment to internationalization.
Reference: PR #742
Two Docs releases (v2.5.0 and v2.6.0) were completed during the week, bringing numerous fixes and improvements. Notable additions include document pinning functionality, improved accessibility features, and fixes for the Ctrl+K search modal behavior in the editor.
Several UX improvements were made to the Meet product, including a screen sharing troubleshooting modal, setup guide, and accessibility updates. These enhancements will help users resolve common video conferencing issues more easily.
Reference: PR #397
Multiple commits focused on improving the developer experience, including fixing helmfile linting, updating the README with clearer instructions, and addressing issues with Object Storage compatibility. These changes lower the barrier to contribution and deployment.
The community showed active interest in the project this week, with contributions ranging from documentation improvements to bug fixes and feature requests. Several first-time contributors made PRs, indicating growing awareness of the project.
The addition of Docs to Awesome code.gouv.fr will increase visibility within the French government open source ecosystem.
Reference: Issue #761
Disclaimer: This report is based on issues and pull requests closed in the period specified. Even more work goes on in issues and pull requests that aren’t closed yet.
Get involved! Interested in contributing to La Suite numérique open source tools? Visit github.com/suitenumerique to explore the repositories and find ways to participate.
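Part of the human review could even be automated: the metrics table can be recomputed deterministically from the same JSON rather than trusting the LLM's counting. A minimal sketch, assuming GitHub-style label objects and label names like "bug", "enhancement", and "documentation" (real repos may use different labelling schemes):

```python
# Cross-check the report's metrics by counting closed items per category from
# the exported gh JSON. The label names ("bug", "enhancement", "documentation")
# are assumptions; adjust to each repository's actual labelling scheme.

def count_by_label(items: list[dict]) -> dict:
    """Tally bug/feature/documentation counts from gh's `--json labels` output."""
    counts = {"bugs": 0, "features": 0, "docs": 0, "total": len(items)}
    for item in items:
        names = {label["name"].lower() for label in item.get("labels", [])}
        if "bug" in names:
            counts["bugs"] += 1
        if "enhancement" in names or "feature" in names:
            counts["features"] += 1
        if "documentation" in names:
            counts["docs"] += 1
    return counts

# Illustrative records in the shape `gh issue list --json number,labels` emits:
sample = [
    {"number": 735, "labels": [{"name": "bug"}]},
    {"number": 742, "labels": [{"name": "enhancement"}]},
    {"number": 761, "labels": [{"name": "documentation"}, {"name": "bug"}]},
]
print(count_by_label(sample))  # {'bugs': 2, 'features': 1, 'docs': 1, 'total': 3}
```

Numbers that disagree with the generated table are a signal to look closer before publishing.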
So yes, it can be flawed, but it can be very powerful too.
I agree with this.
I have used the free version of ChatGPT to help write basic bash scripts for my personal usage.
Cross-checking is important, as is testing.
Most of the time the scripts worked as intended.
Some required asking the LLM to adjust, and I found that you need to give very precise, bite-sized instructions for it to function correctly.
Double post. See below. Hallucinating.
Expanding on this: among other tools I use Phind.com (paid), and recently, when it went off course, I advanced my own plan and the tool agreed that my plan was better.
In summary, I am developing a “prompt generator”: a front-end framework where I gather together the references and context in advance of the prompt. I thought of placing these context references (URLs) in a Zotero collection. The next idea is to use Albert as the tool to generate the prompt and inject it into Phind. So I am moving away from creating prompts on the fly towards a more structured approach, like a translator. It becomes less important to offer bite-sized chunks of prompt and more important to supply a stream of context-driven objects.

That is where I am: experimenting with this new “language” for communicating structured prompts to Robot Gort (launch Albert’s query field with [Ctrl+Space] to appreciate the visual metaphor). The focus will be on an Albert Python extension to “drive” Gort (whichever AI engine you choose), and on building context collections in Zotero.
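The idea of gathering references first and rendering one structured prompt afterwards could be sketched like this. Everything here is hypothetical scaffolding: the class name, the prompt layout, and the plain lists standing in for the Zotero and Albert integrations.

```python
# Minimal sketch of a context-driven "prompt generator": collect references
# (URLs, manual excerpts, bug reports) up front, then render one structured
# prompt to hand to whichever AI engine is in use. All names are hypothetical;
# a Zotero/Albert integration would replace the plain lists used here.

class PromptGenerator:
    def __init__(self, task: str):
        self.task = task
        self.references: list[str] = []   # e.g. URLs from a Zotero collection
        self.context: list[str] = []      # e.g. excerpts from manuals or bug reports

    def add_reference(self, url: str) -> None:
        self.references.append(url)

    def add_context(self, text: str) -> None:
        self.context.append(text)

    def render(self) -> str:
        """Render a structured prompt instead of a free-form, on-the-fly one."""
        parts = [f"Task: {self.task}", "", "References:"]
        parts += [f"- {url}" for url in self.references]
        parts += ["", "Context:"]
        parts += self.context
        parts += ["", "Answer using only the references and context above."]
        return "\n".join(parts)

gen = PromptGenerator("Explain why this systemd unit fails to start")
gen.add_reference("https://manpages.ubuntu.com/manpages/noble/man5/systemd.unit.5.html")
gen.add_context("journalctl excerpt: Failed to start myapp.service ...")
print(gen.render())
```

The point of the structure is that the engine answers from supplied context rather than from a casual, underspecified prompt.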