The New Contributors Dashboard analyzes the participation of new contributors in the project. It provides a leaderboard of new contributors, ranked by their contributions over a selected period.
Sustainability and Succession Planning: The continuous involvement of new contributors ensures the long-term sustainability of open-source projects. As existing contributors may move on or take up different responsibilities, new contributors play a vital role in filling those gaps.
Fresh Perspectives and Ideas: New contributors bring fresh perspectives, ideas, and diverse skill sets to the project. They may offer innovative solutions, identify areas for improvement, and contribute to the overall project evolution.
A Productivity page provides consolidated insights to enhance efficiency and measure productivity in software development.
It offers visual representations of data, such as commits per active day, new contributors, drifting-away contributors, engagement gap, work time distribution impact, and effort by pull request size, allowing contributors and project managers to monitor progress, identify bottlenecks, and optimize workflows.
By providing real-time information and actionable analytics, the productivity dashboard empowers open source projects to make data-driven decisions, improve collaboration, and deliver high-quality software more effectively.
The Commits per Active Day Dashboard provides insights into code commit frequency on active development days. It measures the average number of code commits contributors make on active development days.
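The computation behind this metric can be sketched as follows. This is a minimal illustration, not the dashboard's actual implementation; the function name and the assumption that commit timestamps are available as calendar dates are ours.

```python
from datetime import date
from collections import Counter

def commits_per_active_day(commit_dates: list[date]) -> float:
    """Average number of commits on 'active' days, i.e. days with >= 1 commit."""
    if not commit_dates:
        return 0.0
    per_day = Counter(commit_dates)              # commits grouped by calendar day
    return sum(per_day.values()) / len(per_day)  # total commits / active days

# Example: 5 commits spread over 2 active days
sample = [date(2024, 1, 1)] * 3 + [date(2024, 1, 2)] * 2
print(commits_per_active_day(sample))
```

Note that days with zero commits never enter the denominator, which is what distinguishes this metric from a plain commits-per-day average.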
Early Issue Detection: A higher number of commits per active day increases the likelihood of early issue detection. Regular code commits provide more opportunities for contributors to identify potential issues or bugs during the development process.
Code Quality and Stability: A consistent number of commits indicates ongoing code enhancements and maintenance, leading to improved code quality over time.
Productivity Assessment: A higher number of commits per active day suggests that contributors are actively working on code changes, implementing new features, fixing bugs, and making improvements.
The Drifting Away Contributors metric focuses on identifying contributors who were once active in an open-source project but have gradually become less engaged over time.
This chart is not affected by time filter changes; the data is always computed relative to "today".
Drifting Away Contributors are:
Users who have made at least 5 code contributions to the project overall.
At least one of those contributions was made in the last 6 months.
The contributor has made no contributions in the last 3 months.
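The three criteria above can be expressed as a simple predicate. This is an illustrative sketch, assuming a contributor's history is available as a list of contribution dates; the month boundaries are approximated in days and may differ from the dashboard's exact cutoffs.

```python
from datetime import date, timedelta

def is_drifting_away(contribution_dates: list[date], today: date) -> bool:
    """Apply the three Drifting Away Contributors criteria."""
    six_months_ago = today - timedelta(days=182)   # ~6 months, approximated
    three_months_ago = today - timedelta(days=91)  # ~3 months, approximated
    return (
        len(contribution_dates) >= 5                               # >= 5 contributions overall
        and any(d >= six_months_ago for d in contribution_dates)   # active within last 6 months
        and all(d < three_months_ago for d in contribution_dates)  # silent for last 3 months
    )
```

All three conditions must hold at once: a long-silent contributor with only 4 lifetime contributions, or one who committed last week, is not counted.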
The Drifting Away Contributors metric is essential for maintaining a healthy and active contributor community. By identifying contributors who are gradually becoming less engaged, project managers can take proactive measures to understand their reasons for disengagement and find ways to re-engage them.
The Work Time Distribution Impact Dashboard analyzes how contributors allocate their work time and the impact of different activities on project progress.
The chart shows commit trends and looks for patterns where long hours, non-business hours, or weekend work may have contributed to burnout. Burnout can be thought of as a sustained drop in commits following a period of heightened activity.
The purpose of the chart is to find out if there is a risk of burnout among contributors due to long hours.
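One way to flag the pattern described above, heightened activity followed by a sustained drop, is to compare recent weekly commit counts against an earlier baseline. This heuristic and its thresholds are our illustration, not the dashboard's actual algorithm.

```python
def burnout_risk(weekly_commits: list[int],
                 recent_weeks: int = 4,
                 drop_ratio: float = 0.5) -> bool:
    """Flag a contributor whose recent weekly commit volume has fallen below
    `drop_ratio` of their earlier average, suggesting a burnout-like drop."""
    if len(weekly_commits) <= recent_weeks:
        return False  # not enough history to establish a baseline
    earlier = weekly_commits[:-recent_weeks]
    recent = weekly_commits[-recent_weeks:]
    earlier_avg = sum(earlier) / len(earlier)
    recent_avg = sum(recent) / len(recent)
    return earlier_avg > 0 and recent_avg < drop_ratio * earlier_avg
```

A contributor with steady output never triggers the flag; only a sharp decline relative to their own baseline does.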
Workload Distribution: Monitoring the Work Time Distribution Impact helps identify potential workload imbalances among contributors. If one or a few contributors are consistently spending a disproportionate amount of time on specific activities, it can lead to burnout or reduced productivity.
Performance Evaluation: The Work Time Distribution Impact metric can contribute to performance evaluation and feedback processes. By analyzing how contributors allocate their work time, project managers can identify patterns of efficiency or areas that require improvement.
The Engagement Gap metric measures the difference between expected and actual levels of contributor engagement. The dashboard shows the ratio between the number of PR comments made by the most active commenter and the number made by the least active commenter.
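The ratio just described can be sketched as below. The exact formula the dashboard uses may differ; this is an illustration, assuming per-contributor PR comment counts are available as a mapping.

```python
def engagement_gap(pr_comment_counts: dict[str, int]) -> float:
    """Ratio of the most active PR commenter's count to the least active one's.
    A value near 1 means evenly spread engagement; a large value means a wide gap."""
    counts = list(pr_comment_counts.values())
    lowest = min(counts)
    if lowest == 0:
        return float("inf")  # at least one contributor made no comments at all
    return max(counts) / lowest

print(engagement_gap({"alice": 40, "bob": 10, "carol": 20}))
```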
Performance Assessment: The Engagement Gap metric enables you to assess the project's overall engagement level by comparing it to the expected or desired level. It provides a quantitative measure of how actively contributors are participating and helps identify any gaps between the expected and actual engagement levels.
Community Health: The Engagement Gap metric provides valuable insights into the health and dynamics of the project community. Large engagement gaps may indicate potential challenges, such as communication issues, a lack of mentorship, or unclear contribution guidelines.
How to Improve the Engagement Gap?
Set Clear Expectations: Clearly define the expected engagement levels for each project to provide a benchmark for comparison.
Encourage Collaboration: Foster a culture of collaboration within your team by encouraging open communication and sharing of ideas.
Provide Feedback: Regularly review the Engagement Gap metrics with your team and provide feedback on areas that need improvement.
Recognize Achievements: Acknowledge and reward team members who actively contribute to reducing the Engagement Gap, motivating others to follow suit.
The Effort By Pull Request Batch Size metric analyzes the relationship between the size of pull requests (measured by lines of code changed) and the time contributors spend reviewing and merging them.
Here are ways you can interact with the chart to gain deeper insights:
Filter by Date Range: This allows users to analyze the metrics across different periods to observe how the trends have evolved.
Compare Trends: The chart compares the current period with previous periods, enabling you to spot differences or improvements.
The metric shows the distribution of pull requests across different size categories, from "Very Small" to "Gigantic."
This information can help identify potential bottlenecks or areas where the team may need additional support or process improvements.
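Bucketing pull requests into the size categories mentioned above might look like the sketch below. The documentation does not state the actual line-count cutoffs, so the thresholds here are illustrative assumptions.

```python
# Illustrative thresholds (lines changed); the dashboard's actual cutoffs
# are not documented here and may differ.
SIZE_BUCKETS = [
    (10, "Very Small"),
    (50, "Small"),
    (200, "Medium"),
    (500, "Large"),
    (1000, "Very Large"),
]

def pr_size_category(lines_changed: int) -> str:
    """Map a pull request's total lines changed to a size category label."""
    for upper_bound, label in SIZE_BUCKETS:
        if lines_changed <= upper_bound:
            return label
    return "Gigantic"
```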
The metric tracks the average time required for reviewing and merging pull requests in each size category.
Longer review and merge times for larger pull requests may indicate a need for better code organization, more thorough review processes, or additional resources.
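Averaging review-and-merge time per size category can be sketched as follows. The record shape (a dict with `category` and `review_hours` keys) is an assumption for illustration, not the dashboard's data model.

```python
from collections import defaultdict

def avg_review_hours_by_size(prs: list[dict]) -> dict[str, float]:
    """Average hours from PR open to merge, grouped by size category.
    Each PR record is assumed to carry 'category' and 'review_hours' keys."""
    grouped: dict[str, list[float]] = defaultdict(list)
    for pr in prs:
        grouped[pr["category"]].append(pr["review_hours"])
    return {cat: sum(hours) / len(hours) for cat, hours in grouped.items()}
```

Comparing these averages across categories is what surfaces the pattern the text describes: if "Large" PRs take disproportionately longer than "Small" ones, the review process for big changes may need attention.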
The metric includes information on the number of participants and comments associated with each pull request size category.
Higher numbers of participants and comments for larger pull requests suggest increased collaboration and coordination efforts, which can be both positive (better code quality) and negative (potential delays or inefficiencies).
Development Cycle Time: The Effort By Pull Request Batch Size metric provides insights into the overall development cycle time. By analyzing the relationship between batch size and effort, you can identify trends that affect the time taken to review and merge pull requests.
Review Efficiency: The Effort By Pull Request Batch Size metric helps project managers evaluate the efficiency of the pull request review process. By analyzing the effort required for different batch sizes, you can identify patterns and trends that impact the speed and quality of reviews.
Q: What does the Effort By Pull Request Batch Size metric measure? A: It measures the relationship between the size of pull requests, in terms of lines of code changed, and the amount of time contributors spend reviewing and merging them.
Q: Why is it important to analyze the Effort By Pull Request Batch Size metric? A: Understanding this metric helps optimize the review process by identifying the most efficient batch sizes for pull requests, thus reducing review time and improving workflow efficiency.
Q: How can I interact with the chart to get more insights? A: You can filter the analysis by date range to observe trends over time and compare these trends with previous periods to identify improvements or regressions in efficiency.
Q: Can analyzing this metric reveal trends over specific periods? A: Yes, by filtering the data by specific date ranges, it's possible to observe how the trends in pull request batch sizes and review times have evolved, helping teams adapt their strategies accordingly.