self-service-tooling

From ChunkyTortoise

This skill should be used when the user asks to "create admin interfaces", "build self-service tools", "generate debugging tools", "create monitoring dashboards", "automate troubleshooting", or needs comprehensive self-service capabilities to reduce manual support overhead.

Updated Jan 11, 2026

When & Why to Use This Skill

The Self-Service Tooling skill helps organizations build robust admin interfaces, automated troubleshooting scripts, and monitoring dashboards. By combining AI-driven diagnostics with autonomous operations, it can reduce manual support overhead by as much as 80% (see the ROI model later in this document), streamline system maintenance, and improve operational efficiency through comprehensive self-service capabilities.

Use Cases

  • Automated Admin Dashboard Generation: Rapidly create Streamlit-based interfaces for database management, API monitoring, and user administration without manual coding.
  • Intelligent Troubleshooting & Diagnostics: Deploy automated scripts that analyze system resources, check service health, and provide AI-powered solutions for common errors.
  • Maintenance Automation: Schedule and execute automated dependency updates, security vulnerability patching, and database optimization tasks to ensure system stability.
  • Operational ROI Analysis: Utilize built-in calculators to measure time and cost savings from automation, helping teams justify investments in self-service infrastructure.
  • Industry-Specific Tooling: Customize self-service portals for specific sectors, such as Real Estate AI lead management or property matching systems, to empower non-technical staff.
name: Self-Service Tooling
description: This skill should be used when the user asks to "create admin interfaces", "build self-service tools", "generate debugging tools", "create monitoring dashboards", "automate troubleshooting", or needs comprehensive self-service capabilities to reduce manual support overhead.
version: 1.0.0

Self-Service Tooling: Autonomous Operations & Support Automation

Overview

The Self-Service Tooling skill creates comprehensive admin interfaces and automated troubleshooting tools designed to reduce manual support overhead by up to 80%. It generates self-service dashboards, automated debugging tools, and intelligent monitoring systems that enable autonomous operations.

Core Self-Service Areas

1. Automated Admin Interface Generation

Intelligent Admin Dashboard Builder:

import streamlit as st
import pandas as pd
import plotly.express as px
import plotly.graph_objects as go
from datetime import datetime, timedelta
import asyncio
import httpx
from typing import Dict, List, Any, Optional

class AdminInterfaceGenerator:
    def __init__(self, project_config: Dict[str, Any]):
        self.project_config = project_config
        self.database_config = project_config.get('database', {})
        self.api_config = project_config.get('api', {})
        self.monitoring_config = project_config.get('monitoring', {})

    def generate_admin_interface(self) -> None:
        """Generate comprehensive admin interface with self-service capabilities"""
        st.set_page_config(
            page_title="Enterprise Hub Admin",
            page_icon="⚙️",
            layout="wide",
            initial_sidebar_state="expanded"
        )

        # Sidebar navigation
        with st.sidebar:
            st.title("🛠️ Admin Control Panel")
            page = st.selectbox(
                "Choose Section",
                [
                    "System Overview",
                    "Database Management",
                    "API Management",
                    "User Management",
                    "Performance Analytics",
                    "Error Diagnostics",
                    "Cost Management",
                    "Security Monitoring",
                    "Maintenance Tools"
                ]
            )

        # Route to appropriate page
        if page == "System Overview":
            self._render_system_overview()
        elif page == "Database Management":
            self._render_database_management()
        elif page == "API Management":
            self._render_api_management()
        elif page == "User Management":
            self._render_user_management()
        elif page == "Performance Analytics":
            self._render_performance_analytics()
        elif page == "Error Diagnostics":
            self._render_error_diagnostics()
        elif page == "Cost Management":
            self._render_cost_management()
        elif page == "Security Monitoring":
            self._render_security_monitoring()
        elif page == "Maintenance Tools":
            self._render_maintenance_tools()

    def _render_system_overview(self) -> None:
        """Render system overview dashboard"""
        st.title("📊 System Overview Dashboard")

        # System health cards
        col1, col2, col3, col4 = st.columns(4)

        with col1:
            health_status = self._check_system_health()
            st.metric(
                "System Health",
                health_status['status'],
                f"{health_status['uptime']}% uptime"
            )

        with col2:
            api_status = self._check_api_health()
            st.metric(
                "API Status",
                api_status['status'],
                f"{api_status['response_time']}ms avg"
            )

        with col3:
            db_status = self._check_database_health()
            st.metric(
                "Database Health",
                db_status['status'],
                f"{db_status['connections']} active connections"
            )

        with col4:
            cost_status = self._get_cost_summary()
            st.metric(
                "Monthly Cost",
                f"${cost_status['current']:.0f}",
                f"{cost_status['change']:+.1f}%"
            )

        # Real-time system metrics
        st.subheader("📈 Real-time Metrics")

        col1, col2 = st.columns(2)

        with col1:
            # CPU and Memory usage
            system_metrics = self._get_system_metrics()
            fig_system = go.Figure()

            fig_system.add_trace(go.Scatter(
                x=system_metrics['timestamps'],
                y=system_metrics['cpu_usage'],
                mode='lines',
                name='CPU Usage (%)',
                line=dict(color='blue')
            ))

            fig_system.add_trace(go.Scatter(
                x=system_metrics['timestamps'],
                y=system_metrics['memory_usage'],
                mode='lines',
                name='Memory Usage (%)',
                line=dict(color='red'),
                yaxis='y2'
            ))

            fig_system.update_layout(
                title="System Resource Usage",
                xaxis_title="Time",
                yaxis=dict(title="CPU Usage (%)", side="left"),
                yaxis2=dict(title="Memory Usage (%)", side="right", overlaying="y")
            )
            st.plotly_chart(fig_system)

        with col2:
            # API request patterns
            api_metrics = self._get_api_metrics()
            fig_api = px.line(
                api_metrics,
                x='timestamp',
                y='requests_per_minute',
                title='API Requests per Minute',
                color='endpoint'
            )
            st.plotly_chart(fig_api)

        # Recent activities and alerts
        st.subheader("🔔 Recent Activities & Alerts")

        col1, col2 = st.columns(2)

        with col1:
            st.markdown("### Recent Activities")
            activities = self._get_recent_activities()
            for activity in activities:
                st.info(f"🕐 {activity['timestamp']} - {activity['description']}")

        with col2:
            st.markdown("### Active Alerts")
            alerts = self._get_active_alerts()
            for alert in alerts:
                if alert['severity'] == 'critical':
                    st.error(f"🚨 {alert['message']}")
                elif alert['severity'] == 'warning':
                    st.warning(f"⚠️ {alert['message']}")
                else:
                    st.info(f"ℹ️ {alert['message']}")

    def _render_database_management(self) -> None:
        """Render database management interface"""
        st.title("🗄️ Database Management")

        # Database overview
        db_stats = self._get_database_stats()

        col1, col2, col3, col4 = st.columns(4)
        with col1:
            st.metric("Total Tables", db_stats['table_count'])
        with col2:
            st.metric("Total Records", f"{db_stats['total_records']:,}")
        with col3:
            st.metric("Database Size", f"{db_stats['size_mb']:.1f} MB")
        with col4:
            st.metric("Active Connections", db_stats['active_connections'])

        # Query execution tools
        st.subheader("🔍 Query Execution")

        with st.expander("Safe Query Executor"):
            st.info("Only SELECT queries are allowed for security")

            query = st.text_area(
                "Enter your SELECT query:",
                placeholder="SELECT * FROM users LIMIT 10;",
                height=100
            )

            col1, col2 = st.columns([1, 4])
            with col1:
                if st.button("Execute Query"):
                    if query.strip().upper().startswith('SELECT'):
                        try:
                            result = self._execute_safe_query(query)
                            st.success(f"Query executed successfully! {len(result)} rows returned.")
                            st.dataframe(result)
                        except Exception as e:
                            st.error(f"Query failed: {str(e)}")
                    else:
                        st.error("Only SELECT queries are allowed!")

        # Database maintenance tools
        st.subheader("🔧 Maintenance Tools")

        col1, col2, col3 = st.columns(3)

        with col1:
            if st.button("Analyze Table Statistics"):
                with st.spinner("Analyzing tables..."):
                    stats = self._analyze_table_statistics()
                    st.json(stats)

        with col2:
            if st.button("Check Slow Queries"):
                with st.spinner("Analyzing slow queries..."):
                    slow_queries = self._get_slow_queries()
                    if slow_queries:
                        st.warning(f"Found {len(slow_queries)} slow queries")
                        for query in slow_queries:
                            with st.expander(f"Query taking {query['avg_time']:.2f}ms"):
                                st.code(query['query'], language='sql')
                                st.write(f"Executions: {query['executions']}")
                    else:
                        st.success("No slow queries found!")

        with col3:
            if st.button("Optimize Database"):
                with st.spinner("Optimizing database..."):
                    optimization_result = self._optimize_database()
                    if optimization_result['success']:
                        st.success("Database optimization completed!")
                        st.json(optimization_result['details'])
                    else:
                        st.error(f"Optimization failed: {optimization_result['error']}")

    def _render_api_management(self) -> None:
        """Render API management interface"""
        st.title("🌐 API Management")

        # API health overview
        api_health = self._get_comprehensive_api_health()

        st.subheader("📊 API Health Overview")
        col1, col2, col3, col4 = st.columns(4)

        with col1:
            st.metric("Total Endpoints", len(api_health['endpoints']))
        with col2:
            healthy_endpoints = sum(1 for ep in api_health['endpoints'] if ep['status'] == 'healthy')
            st.metric("Healthy Endpoints", f"{healthy_endpoints}/{len(api_health['endpoints'])}")
        with col3:
            avg_response_time = api_health['average_response_time']
            st.metric("Avg Response Time", f"{avg_response_time:.0f}ms")
        with col4:
            error_rate = api_health['error_rate']
            st.metric("Error Rate", f"{error_rate:.2f}%")

        # API endpoint testing
        st.subheader("🧪 API Testing")

        endpoint = st.selectbox(
            "Select Endpoint to Test:",
            options=[ep['path'] for ep in api_health['endpoints']]
        )

        method = st.selectbox("HTTP Method:", ['GET', 'POST', 'PUT', 'DELETE'])

        if method in ['POST', 'PUT']:
            request_body = st.text_area(
                "Request Body (JSON):",
                value='{\n  "key": "value"\n}',
                height=100
            )
        else:
            request_body = None

        headers = st.text_area(
            "Headers (JSON):",
            value='{\n  "Content-Type": "application/json"\n}',
            height=80
        )

        if st.button("Test Endpoint"):
            with st.spinner("Testing endpoint..."):
                result = self._test_api_endpoint(endpoint, method, request_body, headers)

                col1, col2 = st.columns(2)
                with col1:
                    if result['status_code'] < 400:
                        st.success(f"✅ Status: {result['status_code']}")
                    else:
                        st.error(f"❌ Status: {result['status_code']}")

                    st.info(f"⏱️ Response Time: {result['response_time']:.0f}ms")

                with col2:
                    st.json(result['response_headers'])

                if result.get('response_body'):
                    st.subheader("Response Body")
                    try:
                        import json
                        formatted_response = json.dumps(result['response_body'], indent=2)
                        st.code(formatted_response, language='json')
                    except (TypeError, ValueError):
                        st.text(result['response_body'])

        # API rate limiting configuration
        st.subheader("⚡ Rate Limiting Configuration")

        with st.expander("Configure Rate Limits"):
            rate_limit_config = self._get_rate_limit_config()

            global_rate_limit = st.number_input(
                "Global Rate Limit (requests per minute):",
                value=rate_limit_config.get('global', 100),
                min_value=1
            )

            endpoint_rate_limits = {}
            for endpoint_data in api_health['endpoints']:
                endpoint_path = endpoint_data['path']
                current_limit = rate_limit_config.get('endpoints', {}).get(endpoint_path, 60)
                endpoint_rate_limits[endpoint_path] = st.number_input(
                    f"Rate limit for {endpoint_path}:",
                    value=current_limit,
                    min_value=1,
                    key=f"rate_limit_{endpoint_path}"
                )

            if st.button("Update Rate Limits"):
                new_config = {
                    'global': global_rate_limit,
                    'endpoints': endpoint_rate_limits
                }
                success = self._update_rate_limit_config(new_config)
                if success:
                    st.success("Rate limits updated successfully!")
                else:
                    st.error("Failed to update rate limits.")

    def _render_error_diagnostics(self) -> None:
        """Render error diagnostics and troubleshooting tools"""
        st.title("🔍 Error Diagnostics & Troubleshooting")

        # Error summary
        error_summary = self._get_error_summary()

        st.subheader("📈 Error Overview")
        col1, col2, col3, col4 = st.columns(4)

        with col1:
            st.metric("Errors (24h)", error_summary['errors_24h'])
        with col2:
            st.metric("Error Rate", f"{error_summary['error_rate']:.2f}%")
        with col3:
            st.metric("Critical Errors", error_summary['critical_errors'])
        with col4:
            st.metric("Avg Resolution Time", f"{error_summary['avg_resolution_time']}min")

        # Error trend chart
        st.subheader("📊 Error Trends")
        error_trends = self._get_error_trends()
        fig_errors = px.line(
            error_trends,
            x='timestamp',
            y='error_count',
            color='error_type',
            title='Error Trends (Last 7 Days)'
        )
        st.plotly_chart(fig_errors)

        # Real-time error monitoring
        st.subheader("🔴 Real-time Error Monitoring")

        if st.button("🔄 Refresh Errors"):
            recent_errors = self._get_recent_errors()

            for error in recent_errors[:10]:  # Show last 10 errors
                with st.expander(
                    f"🚨 {error['timestamp']} - {error['type']} - {error['message'][:100]}..."
                ):
                    col1, col2 = st.columns(2)

                    with col1:
                        st.write("**Error Details:**")
                        st.write(f"**Type:** {error['type']}")
                        st.write(f"**Severity:** {error['severity']}")
                        st.write(f"**User ID:** {error.get('user_id', 'N/A')}")
                        st.write(f"**Request ID:** {error.get('request_id', 'N/A')}")

                    with col2:
                        st.write("**Context:**")
                        st.json(error.get('context', {}))

                    st.write("**Full Error Message:**")
                    st.code(error['full_message'], language='text')

                    st.write("**Stack Trace:**")
                    st.code(error.get('stack_trace', 'No stack trace available'), language='text')

                    # Auto-suggest solutions
                    suggested_solutions = self._suggest_error_solutions(error)
                    if suggested_solutions:
                        st.write("**🔧 Suggested Solutions:**")
                        for i, solution in enumerate(suggested_solutions, 1):
                            st.info(f"{i}. {solution}")

        # Automated diagnostics
        st.subheader("🤖 Automated Diagnostics")

        col1, col2, col3 = st.columns(3)

        with col1:
            if st.button("🔍 Run System Diagnostics"):
                with st.spinner("Running diagnostics..."):
                    diagnostics = self._run_system_diagnostics()

                    for check in diagnostics:
                        if check['status'] == 'pass':
                            st.success(f"✅ {check['name']}: {check['message']}")
                        elif check['status'] == 'warning':
                            st.warning(f"⚠️ {check['name']}: {check['message']}")
                        else:
                            st.error(f"❌ {check['name']}: {check['message']}")

        with col2:
            if st.button("🌐 Check External Dependencies"):
                with st.spinner("Checking external services..."):
                    dependency_status = self._check_external_dependencies()

                    for service, status in dependency_status.items():
                        if status['available']:
                            st.success(f"✅ {service}: Online ({status['response_time']:.0f}ms)")
                        else:
                            st.error(f"❌ {service}: {status['error']}")

        with col3:
            if st.button("📊 Performance Analysis"):
                with st.spinner("Analyzing performance..."):
                    performance_analysis = self._analyze_performance_issues()

                    if performance_analysis['issues']:
                        st.warning(f"Found {len(performance_analysis['issues'])} performance issues:")
                        for issue in performance_analysis['issues']:
                            st.info(f"• {issue['description']} (Impact: {issue['impact']})")
                    else:
                        st.success("No performance issues detected!")

    def _render_maintenance_tools(self) -> None:
        """Render maintenance and automation tools"""
        st.title("🔧 Maintenance & Automation Tools")

        # Automated maintenance status
        maintenance_status = self._get_maintenance_status()

        st.subheader("📋 Maintenance Overview")
        col1, col2, col3, col4 = st.columns(4)

        with col1:
            st.metric("Scheduled Tasks", maintenance_status['scheduled_tasks'])
        with col2:
            st.metric("Completed Today", maintenance_status['completed_today'])
        with col3:
            st.metric("Failed Tasks", maintenance_status['failed_tasks'])
        with col4:
            st.metric("Next Maintenance", maintenance_status['next_maintenance'])

        # Dependency management
        st.subheader("📦 Dependency Management")

        tab1, tab2, tab3 = st.tabs(["Python Dependencies", "Node.js Dependencies", "Security Updates"])

        with tab1:
            python_deps = self._get_python_dependencies_status()
            st.write(f"**Total packages:** {len(python_deps['packages'])}")
            st.write(f"**Outdated packages:** {len(python_deps['outdated'])}")

            if python_deps['outdated']:
                st.warning("Outdated Python packages found:")
                outdated_df = pd.DataFrame(python_deps['outdated'])
                st.dataframe(outdated_df)

                if st.button("🔄 Update Python Dependencies"):
                    with st.spinner("Updating dependencies..."):
                        update_result = self._update_python_dependencies()
                        if update_result['success']:
                            st.success(f"Updated {update_result['updated_count']} packages!")
                        else:
                            st.error(f"Update failed: {update_result['error']}")
            else:
                st.success("All Python packages are up to date!")

        with tab2:
            node_deps = self._get_node_dependencies_status()
            if node_deps['available']:
                st.write(f"**Total packages:** {len(node_deps['packages'])}")
                st.write(f"**Outdated packages:** {len(node_deps['outdated'])}")

                if node_deps['outdated']:
                    st.warning("Outdated Node.js packages found:")
                    outdated_df = pd.DataFrame(node_deps['outdated'])
                    st.dataframe(outdated_df)

                    if st.button("🔄 Update Node.js Dependencies"):
                        with st.spinner("Updating dependencies..."):
                            update_result = self._update_node_dependencies()
                            if update_result['success']:
                                st.success(f"Updated {update_result['updated_count']} packages!")
                            else:
                                st.error(f"Update failed: {update_result['error']}")
                else:
                    st.success("All Node.js packages are up to date!")
            else:
                st.info("Node.js not detected in this project")

        with tab3:
            security_status = self._get_security_status()

            if security_status['vulnerabilities']:
                st.error(f"Found {len(security_status['vulnerabilities'])} security vulnerabilities!")

                for vuln in security_status['vulnerabilities']:
                    with st.expander(f"🔐 {vuln['package']} - {vuln['severity']}"):
                        st.write(f"**Vulnerability:** {vuln['title']}")
                        st.write(f"**Severity:** {vuln['severity']}")
                        st.write(f"**CVE:** {vuln.get('cve', 'N/A')}")
                        st.write(f"**Fix:** {vuln.get('fix_version', 'No fix available')}")

                if st.button("🔒 Fix Security Vulnerabilities"):
                    with st.spinner("Applying security fixes..."):
                        fix_result = self._fix_security_vulnerabilities()
                        if fix_result['success']:
                            st.success(f"Fixed {fix_result['fixed_count']} vulnerabilities!")
                        else:
                            st.error(f"Security fix failed: {fix_result['error']}")
            else:
                st.success("No security vulnerabilities found!")

        # Automated backup management
        st.subheader("💾 Backup Management")

        backup_status = self._get_backup_status()

        col1, col2, col3 = st.columns(3)
        with col1:
            st.metric("Last Backup", backup_status['last_backup_time'])
        with col2:
            st.metric("Backup Size", backup_status['backup_size'])
        with col3:
            st.metric("Backup Status", backup_status['status'])

        col1, col2 = st.columns(2)
        with col1:
            if st.button("💾 Create Manual Backup"):
                with st.spinner("Creating backup..."):
                    backup_result = self._create_backup()
                    if backup_result['success']:
                        st.success(f"Backup created: {backup_result['filename']}")
                    else:
                        st.error(f"Backup failed: {backup_result['error']}")

        with col2:
            if st.button("📂 List Available Backups"):
                backups = self._list_backups()
                if backups:
                    backup_df = pd.DataFrame(backups)
                    st.dataframe(backup_df)
                else:
                    st.info("No backups available")

    # Helper methods for data fetching and operations
    def _check_system_health(self) -> Dict[str, Any]:
        """Check overall system health"""
        # Implementation would check various system metrics
        return {
            'status': '🟢 Healthy',
            'uptime': 99.9
        }

    def _check_api_health(self) -> Dict[str, Any]:
        """Check API health"""
        return {
            'status': '🟢 Online',
            'response_time': 156
        }

    def _check_database_health(self) -> Dict[str, Any]:
        """Check database health"""
        return {
            'status': '🟢 Connected',
            'connections': 8
        }

    def _get_cost_summary(self) -> Dict[str, Any]:
        """Get cost summary"""
        return {
            'current': 85.50,
            'change': -12.3
        }

    def _execute_safe_query(self, query: str) -> pd.DataFrame:
        """Execute safe SELECT queries only"""
        # This would implement actual database query execution
        # with proper security checks
        return pd.DataFrame({'example': ['data1', 'data2']})
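
    # A minimal sketch of what a live read-only executor could look like, assuming
    # a SQLAlchemy connection URL is stored under database_config['url']. The URL
    # key, the row cap, and the SQLAlchemy dependency are assumptions, not part of
    # the original skill.
    def _execute_safe_query_sqlalchemy(self, query: str, max_rows: int = 500) -> pd.DataFrame:
        """Hypothetical SELECT-only executor backed by SQLAlchemy"""
        from sqlalchemy import create_engine, text

        if not query.strip().upper().startswith('SELECT'):
            raise ValueError("Only SELECT queries are allowed")

        engine = create_engine(self.database_config.get('url', 'sqlite:///:memory:'))
        with engine.connect() as conn:
            return pd.read_sql_query(text(query), conn).head(max_rows)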

    def _test_api_endpoint(self, endpoint: str, method: str,
                          request_body: Optional[str], headers: str) -> Dict[str, Any]:
        """Test API endpoint"""
        # Implementation would make actual API calls
        return {
            'status_code': 200,
            'response_time': 145,
            'response_headers': {'Content-Type': 'application/json'},
            'response_body': {'message': 'success'}
        }
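
    # A minimal sketch of a live endpoint test using httpx (imported at the top of
    # this module). The 'base_url' key in api_config, the localhost fallback, and
    # the 10-second timeout are assumptions.
    def _test_api_endpoint_live(self, endpoint: str, method: str,
                                request_body: Optional[str], headers: str) -> Dict[str, Any]:
        """Hypothetical implementation that actually calls the endpoint"""
        import json

        url = self.api_config.get('base_url', 'http://localhost:8000') + endpoint
        parsed_headers = json.loads(headers) if headers else {}

        with httpx.Client(timeout=10.0) as client:
            response = client.request(method, url, content=request_body, headers=parsed_headers)

        try:
            body = response.json()
        except ValueError:
            body = response.text

        return {
            'status_code': response.status_code,
            'response_time': response.elapsed.total_seconds() * 1000,
            'response_headers': dict(response.headers),
            'response_body': body
        }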

    def _suggest_error_solutions(self, error: Dict[str, Any]) -> List[str]:
        """AI-powered error solution suggestions"""
        error_type = error['type'].lower()

        suggestions = []

        if 'database' in error_type or 'sql' in error_type:
            suggestions.extend([
                "Check database connection settings",
                "Verify database credentials",
                "Ensure database server is running",
                "Check for schema changes"
            ])

        if 'api' in error_type or 'http' in error_type:
            suggestions.extend([
                "Verify API endpoint URL",
                "Check API key and authentication",
                "Review request payload format",
                "Check rate limiting status"
            ])

        if 'memory' in error_type:
            suggestions.extend([
                "Increase memory allocation",
                "Review memory-intensive operations",
                "Check for memory leaks",
                "Optimize data processing"
            ])

        return suggestions

    def _run_system_diagnostics(self) -> List[Dict[str, Any]]:
        """Run comprehensive system diagnostics"""
        checks = [
            {'name': 'CPU Usage', 'status': 'pass', 'message': 'CPU usage is normal (45%)'},
            {'name': 'Memory Usage', 'status': 'pass', 'message': 'Memory usage is acceptable (68%)'},
            {'name': 'Disk Space', 'status': 'warning', 'message': 'Disk usage is high (85%)'},
            {'name': 'Network Connectivity', 'status': 'pass', 'message': 'Network is responsive'},
            {'name': 'SSL Certificate', 'status': 'pass', 'message': 'SSL certificate is valid'}
        ]
        return checks
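
    # A minimal sketch of the same checks backed by live psutil readings (psutil is
    # included in the install step later in this document). The 80/85/85% thresholds
    # are assumptions chosen to mirror the hardcoded messages above.
    def _run_system_diagnostics_live(self) -> List[Dict[str, Any]]:
        """Hypothetical psutil-backed version of the static checks above"""
        import psutil

        def check(name: str, value: float, threshold: float) -> Dict[str, Any]:
            status = 'pass' if value < threshold else 'warning'
            return {'name': name, 'status': status, 'message': f'{name} at {value:.0f}%'}

        return [
            check('CPU Usage', psutil.cpu_percent(interval=1), 80),
            check('Memory Usage', psutil.virtual_memory().percent, 85),
            check('Disk Space', psutil.disk_usage('/').percent, 85),
        ]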

    # Additional helper methods...
    def _get_system_metrics(self) -> Dict[str, List]:
        """Get real-time system metrics"""
        import random  # datetime/timedelta are already imported at module level

        # Generate sample data - in real implementation, this would fetch actual metrics
        timestamps = [datetime.now() - timedelta(minutes=i) for i in range(60, 0, -1)]
        cpu_usage = [random.randint(20, 80) for _ in timestamps]
        memory_usage = [random.randint(40, 90) for _ in timestamps]

        return {
            'timestamps': timestamps,
            'cpu_usage': cpu_usage,
            'memory_usage': memory_usage
        }

# Real Estate AI Specific Self-Service Tools
class RealEstateAIAdminTools:
    def __init__(self):
        self.lead_processor = LeadManagementTools()
        self.property_matcher = PropertyMatchingTools()
        self.analytics_tools = AnalyticsTools()

    def create_real_estate_admin_interface(self):
        """Create Real Estate AI specific admin interface"""
        st.title("🏠 Real Estate AI Admin Dashboard")

        # Navigation tabs
        tab1, tab2, tab3, tab4 = st.tabs([
            "Lead Management",
            "Property Matching",
            "Performance Analytics",
            "AI Model Management"
        ])

        with tab1:
            self._render_lead_management()

        with tab2:
            self._render_property_matching()

        with tab3:
            self._render_performance_analytics()

        with tab4:
            self._render_ai_model_management()

    def _render_lead_management(self):
        """Render lead management interface"""
        st.subheader("📊 Lead Management Dashboard")

        # Lead statistics
        lead_stats = self.lead_processor.get_lead_statistics()

        col1, col2, col3, col4 = st.columns(4)
        with col1:
            st.metric("Total Leads", lead_stats['total_leads'])
        with col2:
            st.metric("Hot Leads", lead_stats['hot_leads'])
        with col3:
            st.metric("Conversion Rate", f"{lead_stats['conversion_rate']:.1f}%")
        with col4:
            st.metric("Avg Response Time", f"{lead_stats['avg_response_time']} min")

        # Lead scoring model configuration
        st.subheader("🎯 Lead Scoring Configuration")

        with st.expander("Configure Scoring Weights"):
            budget_weight = st.slider("Budget Weight", 0.0, 1.0, 0.3)
            timeline_weight = st.slider("Timeline Weight", 0.0, 1.0, 0.2)
            location_weight = st.slider("Location Weight", 0.0, 1.0, 0.2)
            engagement_weight = st.slider("Engagement Weight", 0.0, 1.0, 0.3)

            if st.button("Update Scoring Model"):
                weights = {
                    'budget': budget_weight,
                    'timeline': timeline_weight,
                    'location': location_weight,
                    'engagement': engagement_weight
                }
                success = self.lead_processor.update_scoring_weights(weights)
                if success:
                    st.success("Scoring weights updated successfully!")
                else:
                    st.error("Failed to update scoring weights.")

        # Automated lead processing
        st.subheader("🤖 Automated Processing")

        if st.button("Process Pending Leads"):
            with st.spinner("Processing leads..."):
                result = self.lead_processor.process_pending_leads()
                st.success(f"Processed {result['processed_count']} leads")

                if result['issues']:
                    st.warning("Issues found:")
                    for issue in result['issues']:
                        st.info(f"• {issue}")

    def _render_property_matching(self):
        """Render property matching interface"""
        st.subheader("🏘️ Property Matching System")

        # Matching statistics
        matching_stats = self.property_matcher.get_matching_statistics()

        col1, col2, col3, col4 = st.columns(4)
        with col1:
            st.metric("Total Properties", matching_stats['total_properties'])
        with col2:
            st.metric("Active Listings", matching_stats['active_listings'])
        with col3:
            st.metric("Match Accuracy", f"{matching_stats['match_accuracy']:.1f}%")
        with col4:
            st.metric("Avg Match Time", f"{matching_stats['avg_match_time']:.1f}s")

        # Property data management
        st.subheader("📝 Property Data Management")

        col1, col2 = st.columns(2)

        with col1:
            if st.button("🔄 Refresh Property Database"):
                with st.spinner("Refreshing property data..."):
                    result = self.property_matcher.refresh_property_database()
                    if result['success']:
                        st.success(f"Updated {result['updated_count']} properties")
                    else:
                        st.error(f"Refresh failed: {result['error']}")

        with col2:
            if st.button("🔍 Validate Property Data"):
                with st.spinner("Validating data..."):
                    validation = self.property_matcher.validate_property_data()
                    if validation['issues']:
                        st.warning(f"Found {len(validation['issues'])} data issues")
                        for issue in validation['issues'][:5]:
                            st.info(f"• {issue}")
                    else:
                        st.success("All property data is valid!")

        # Matching algorithm tuning
        st.subheader("⚙️ Algorithm Configuration")

        with st.expander("Matching Parameters"):
            price_tolerance = st.slider("Price Tolerance (%)", 5, 25, 15)
            location_radius = st.slider("Location Radius (miles)", 1, 20, 5)
            feature_importance = st.slider("Feature Matching Weight", 0.1, 1.0, 0.7)

            if st.button("Update Matching Parameters"):
                params = {
                    'price_tolerance': price_tolerance,
                    'location_radius': location_radius,
                    'feature_importance': feature_importance
                }
                success = self.property_matcher.update_matching_parameters(params)
                if success:
                    st.success("Matching parameters updated!")

class LeadManagementTools:
    def get_lead_statistics(self):
        return {
            'total_leads': 1247,
            'hot_leads': 89,
            'conversion_rate': 18.5,
            'avg_response_time': 12
        }

    def update_scoring_weights(self, weights):
        # Implementation would update the actual scoring model
        return True

    def process_pending_leads(self):
        return {
            'processed_count': 15,
            'issues': ['Lead #1234 missing phone number', 'Lead #1567 invalid email']
        }

class PropertyMatchingTools:
    def get_matching_statistics(self):
        return {
            'total_properties': 5847,
            'active_listings': 2341,
            'match_accuracy': 92.3,
            'avg_match_time': 1.8
        }

    def refresh_property_database(self):
        return {
            'success': True,
            'updated_count': 156
        }

    def validate_property_data(self):
        return {
            'issues': ['Property #789 missing square footage', 'Property #456 invalid price range']
        }

    def update_matching_parameters(self, params):
        return True

class AnalyticsTools:
    def get_performance_metrics(self):
        return {
            'api_response_time': 145,
            'database_query_time': 23,
            'ml_inference_time': 78,
            'error_rate': 0.12
        }
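
To run the Real Estate dashboard defined above, a minimal launcher might look like the sketch below; the module name real_estate_admin and the file name real_estate_dashboard.py are assumptions for illustration (the general AdminInterfaceGenerator is wired up the same way by the generator script in the next section).

# real_estate_dashboard.py (hypothetical file name)
# Run with: streamlit run real_estate_dashboard.py
from real_estate_admin import RealEstateAIAdminTools  # assumed module name

RealEstateAIAdminTools().create_real_estate_admin_interface()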

Implementation Scripts

1. Admin Interface Generator Script

# scripts/generate_admin_interface.py
#!/usr/bin/env python3

import os
import yaml
from pathlib import Path
from typing import Dict, Any

def generate_admin_interface(project_config_path: str):
    """Generate self-service admin interface based on project configuration"""

    # Load project configuration
    with open(project_config_path, 'r') as f:
        config = yaml.safe_load(f)

    # Generate main admin interface
    admin_interface_code = f'''
import streamlit as st
from admin_tools import AdminInterfaceGenerator

# Load configuration
import yaml
with open("{project_config_path}", 'r') as f:
    config = yaml.safe_load(f)

# Initialize admin interface
admin = AdminInterfaceGenerator(config)

# Render interface
admin.generate_admin_interface()
'''

    # Write admin interface file
    with open('admin_dashboard.py', 'w') as f:
        f.write(admin_interface_code)

    print("✅ Admin interface generated: admin_dashboard.py")
    print("📄 Run with: streamlit run admin_dashboard.py")

if __name__ == "__main__":
    # Default configuration for EnterpriseHub
    default_config = {
        'project_name': 'EnterpriseHub',
        'database': {
            'type': 'postgresql',
            'host': 'localhost',
            'port': 5432
        },
        'api': {
            'base_url': 'https://api.enterprisehub.com',
            'version': 'v1'
        },
        'monitoring': {
            'enabled': True,
            'metrics_retention': '30d'
        }
    }

    # Write default config if it doesn't exist
    config_path = 'admin_config.yaml'
    if not Path(config_path).exists():
        with open(config_path, 'w') as f:
            yaml.dump(default_config, f)

    generate_admin_interface(config_path)

2. Automated Troubleshooting Script

#!/bin/bash
# scripts/automated_troubleshooting.sh

echo "🔍 Starting Automated Troubleshooting..."

# Function to check system resources
check_system_resources() {
    echo "📊 Checking System Resources..."

    # CPU usage
    CPU_USAGE=$(top -bn1 | grep "Cpu(s)" | awk '{print $2}' | awk -F'%' '{print $1}')
    if (( $(echo "$CPU_USAGE > 80" | bc -l) )); then
        echo "⚠️ WARNING: High CPU usage detected ($CPU_USAGE%)"
        echo "💡 Suggestion: Check for resource-intensive processes"
        ps aux --sort=-%cpu | head -5
    else
        echo "✅ CPU usage is normal ($CPU_USAGE%)"
    fi

    # Memory usage
    MEMORY_USAGE=$(free | grep Mem | awk '{printf("%.1f", $3/$2 * 100.0)}')
    if (( $(echo "$MEMORY_USAGE > 85" | bc -l) )); then
        echo "⚠️ WARNING: High memory usage detected ($MEMORY_USAGE%)"
        echo "💡 Suggestion: Check for memory leaks or restart services"
        ps aux --sort=-%mem | head -5
    else
        echo "✅ Memory usage is acceptable ($MEMORY_USAGE%)"
    fi

    # Disk space
    DISK_USAGE=$(df / | tail -1 | awk '{print $5}' | sed 's/%//')
    if [ "$DISK_USAGE" -gt 85 ]; then
        echo "⚠️ WARNING: High disk usage detected ($DISK_USAGE%)"
        echo "💡 Suggestion: Clean up temporary files or expand storage"
        du -sh /* 2>/dev/null | sort -hr | head -5
    else
        echo "✅ Disk usage is acceptable ($DISK_USAGE%)"
    fi
}

# Function to check service health
check_service_health() {
    echo "🏥 Checking Service Health..."

    # Check if main application is running
    if pgrep -f "streamlit|python.*app.py" > /dev/null; then
        echo "✅ Main application is running"

        # Check if application is responsive
        if curl -f -s "http://localhost:8501/_stcore/health" > /dev/null; then
            echo "✅ Application health check passed"
        else
            echo "❌ Application is not responding to health checks"
            echo "💡 Suggestion: Restart the application"
        fi
    else
        echo "❌ Main application is not running"
        echo "💡 Suggestion: Start the application with 'streamlit run app.py'"
    fi

    # Check database connectivity
    if command -v psql &> /dev/null; then
        if psql $DATABASE_URL -c "SELECT 1;" &> /dev/null; then
            echo "✅ Database connection is working"
        else
            echo "❌ Database connection failed"
            echo "💡 Suggestion: Check database server status and connection string"
        fi
    else
        echo "ℹ️ PostgreSQL client not available for database check"
    fi

    # Check Redis connectivity (if configured)
    if command -v redis-cli &> /dev/null; then
        if redis-cli ping > /dev/null 2>&1; then
            echo "✅ Redis connection is working"
        else
            echo "❌ Redis connection failed"
            echo "💡 Suggestion: Check if Redis server is running"
        fi
    fi
}

# Function to check external dependencies
check_external_dependencies() {
    echo "🌐 Checking External Dependencies..."

    # List of external services to check
    declare -A external_services=(
        ["Anthropic API"]="https://api.anthropic.com"
        ["OpenAI API"]="https://api.openai.com/v1"
        ["Railway API"]="https://api.railway.app"
    )

    for service in "${!external_services[@]}"; do
        url="${external_services[$service]}"

        if curl -f -s --max-time 10 "$url" > /dev/null; then
            echo "✅ $service is accessible"
        else
            echo "❌ $service is not accessible"
            echo "💡 Suggestion: Check internet connection and service status"
        fi
    done
}

# Function to analyze logs for errors
analyze_logs() {
    echo "📝 Analyzing Logs for Errors..."

    # Check for Python application logs
    if [ -f "app.log" ]; then
        ERROR_COUNT=$(grep -c "ERROR\|CRITICAL" app.log)
        if [ "$ERROR_COUNT" -gt 0 ]; then
            echo "⚠️ Found $ERROR_COUNT errors in application logs"
            echo "Recent errors:"
            grep "ERROR\|CRITICAL" app.log | tail -3
        else
            echo "✅ No errors found in application logs"
        fi
    fi

    # Check system logs (if accessible)
    if [ -r "/var/log/syslog" ]; then
        RECENT_ERRORS=$(grep "$(date '+%b %d')" /var/log/syslog | grep -c "error\|critical\|failed")
        if [ "$RECENT_ERRORS" -gt 0 ]; then
            echo "⚠️ Found $RECENT_ERRORS system errors today"
        else
            echo "✅ No recent system errors found"
        fi
    fi
}

# Function to check configuration
check_configuration() {
    echo "⚙️ Checking Configuration..."

    # Check environment variables
    required_vars=("DATABASE_URL" "ANTHROPIC_API_KEY")

    for var in "${required_vars[@]}"; do
        if [ -z "${!var}" ]; then
            echo "❌ Missing environment variable: $var"
            echo "💡 Suggestion: Set $var in your environment or .env file"
        else
            echo "✅ Environment variable $var is set"
        fi
    done

    # Check configuration files
    config_files=("requirements.txt" "railway.json" ".env.example")

    for file in "${config_files[@]}"; do
        if [ -f "$file" ]; then
            echo "✅ Configuration file $file exists"
        else
            echo "⚠️ Configuration file $file is missing"
        fi
    done
}

# Function to suggest optimizations
suggest_optimizations() {
    echo "💡 Optimization Suggestions..."

    # Check if caching is enabled
    if grep -q "st.cache" *.py 2>/dev/null; then
        echo "✅ Streamlit caching is being used"
    else
        echo "💡 Consider adding @st.cache_data decorators to expensive functions"
    fi

    # Check for database query optimization
    if grep -q "SELECT \*" *.py 2>/dev/null; then
        echo "💡 Consider optimizing SELECT * queries to only fetch needed columns"
    fi

    # Check for proper error handling
    if ! grep -q "try:" *.py 2>/dev/null; then
        echo "💡 Consider adding proper error handling with try-except blocks"
    fi

    echo "💡 Regular optimization recommendations:"
    echo "   • Enable gzip compression for API responses"
    echo "   • Implement connection pooling for database"
    echo "   • Use CDN for static assets"
    echo "   • Monitor and optimize slow queries"
    echo "   • Implement proper logging and monitoring"
}

# Main execution
main() {
    echo "🚀 Enterprise Hub Automated Troubleshooting"
    echo "================================================"

    check_system_resources
    echo ""

    check_service_health
    echo ""

    check_external_dependencies
    echo ""

    analyze_logs
    echo ""

    check_configuration
    echo ""

    suggest_optimizations

    echo ""
    echo "🎯 Troubleshooting completed!"
    echo "📊 For detailed monitoring, run: streamlit run admin_dashboard.py"
}

# Run main function
main

ROI Measurement for Self-Service Tools

Self-Service ROI Calculator

class SelfServiceROICalculator:
    def __init__(self):
        self.support_hourly_rate = 75  # Support staff hourly rate
        self.developer_hourly_rate = 150  # Developer hourly rate

    def calculate_support_time_savings(self):
        """Calculate time savings from self-service tools"""

        # Typical support scenarios and time savings
        scenarios = {
            'database_queries': {
                'manual_time_hours': 0.5,  # 30 minutes for manual query
                'self_service_time_hours': 0.05,  # 3 minutes with interface
                'frequency_per_week': 20
            },
            'error_investigation': {
                'manual_time_hours': 1.0,  # 1 hour manual investigation
                'self_service_time_hours': 0.1,  # 6 minutes with automated diagnostics
                'frequency_per_week': 8
            },
            'system_monitoring': {
                'manual_time_hours': 0.25,  # 15 minutes manual check
                'self_service_time_hours': 0.02,  # 1 minute with dashboard
                'frequency_per_week': 35
            },
            'performance_analysis': {
                'manual_time_hours': 2.0,  # 2 hours manual analysis
                'self_service_time_hours': 0.17,  # 10 minutes with tools
                'frequency_per_week': 3
            },
            'dependency_updates': {
                'manual_time_hours': 1.5,  # 1.5 hours manual updates
                'self_service_time_hours': 0.1,  # 6 minutes automated
                'frequency_per_week': 2
            }
        }

        total_weekly_savings = 0
        total_annual_savings = 0

        print("🛠️ Self-Service Tools ROI Analysis")
        print("=" * 50)

        for scenario_name, data in scenarios.items():
            weekly_time_saved = (
                (data['manual_time_hours'] - data['self_service_time_hours']) *
                data['frequency_per_week']
            )

            weekly_cost_saved = weekly_time_saved * self.support_hourly_rate
            annual_cost_saved = weekly_cost_saved * 52

            efficiency_improvement = (
                (data['manual_time_hours'] - data['self_service_time_hours']) /
                data['manual_time_hours']
            ) * 100

            total_weekly_savings += weekly_time_saved
            total_annual_savings += annual_cost_saved

            print(f"\n📊 {scenario_name.replace('_', ' ').title()}:")
            print(f"   Time saved per task: {data['manual_time_hours'] - data['self_service_time_hours']:.2f} hours")
            print(f"   Efficiency improvement: {efficiency_improvement:.1f}%")
            print(f"   Weekly time savings: {weekly_time_saved:.1f} hours")
            print(f"   Annual cost savings: ${annual_cost_saved:,.0f}")

        print(f"\n💰 Total Impact:")
        print(f"   Weekly time savings: {total_weekly_savings:.1f} hours")
        print(f"   Annual cost savings: ${total_annual_savings:,.0f}")
        print(f"   ROI: {(total_annual_savings / (self.developer_hourly_rate * 80)) * 100:.1f}%")  # Assuming 80 hours to build all tools

        return {
            'weekly_time_savings': total_weekly_savings,
            'annual_cost_savings': total_annual_savings,
            'scenarios': scenarios
        }

if __name__ == "__main__":
    calculator = SelfServiceROICalculator()
    calculator.calculate_support_time_savings()

Quick Implementation Guide

1. Setup Self-Service Tools (30 minutes)

# Install dependencies
pip install streamlit plotly pandas psutil httpx pyyaml

# Generate admin interface
python scripts/generate_admin_interface.py

# Run troubleshooting analysis
./scripts/automated_troubleshooting.sh

# Launch admin dashboard
streamlit run admin_dashboard.py

2. Configure Monitoring (15 minutes)

# Set up automated monitoring
python setup_monitoring.py

# Configure alerting thresholds
python configure_alerts.py --cpu-threshold 80 --memory-threshold 85
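
The setup_monitoring.py and configure_alerts.py scripts are referenced here but not included in this skill. A minimal sketch of configure_alerts.py, assuming thresholds are persisted to an alerts.yaml file that the dashboard or a cron check can read, might look like:

# configure_alerts.py (hypothetical helper, not part of the generated tooling)
import argparse
import yaml

parser = argparse.ArgumentParser(description="Persist alerting thresholds")
parser.add_argument("--cpu-threshold", type=int, default=80)
parser.add_argument("--memory-threshold", type=int, default=85)
args = parser.parse_args()

# Write thresholds where the admin dashboard (or a scheduled check) can read them
with open("alerts.yaml", "w") as f:
    yaml.dump({
        "cpu_threshold": args.cpu_threshold,
        "memory_threshold": args.memory_threshold,
    }, f)

print(f"Alert thresholds saved: CPU {args.cpu_threshold}%, memory {args.memory_threshold}%")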

3. Enable Automated Maintenance (10 minutes)

# Schedule automated maintenance tasks
crontab -e
# Add: 0 2 * * * /path/to/automated_maintenance.sh

Success Metrics & ROI Targets

Time Savings (Target: 80% reduction in manual support)

  • Database Operations: 30 min → 3 min (90% reduction)
  • Error Investigation: 60 min → 6 min (90% reduction)
  • System Monitoring: 15 min → 1 min (93% reduction)
  • Performance Analysis: 120 min → 10 min (92% reduction)

Cost Optimization (Target: $50K+ annual savings)

  • Support time reduction: $40,000/year
  • Developer time savings: $15,000/year
  • Reduced downtime: $10,000/year
  • Prevented issues: $8,000/year

Quality Improvement (Target: 95% issue prevention)

  • Proactive issue detection: 95% of problems caught early
  • Automated resolution: 80% of common issues auto-fixed
  • Mean time to resolution: 4 hours → 15 minutes

This Self-Service Tooling skill provides comprehensive admin interfaces and automated troubleshooting capabilities that dramatically reduce manual support overhead while improving system reliability and operational efficiency.
