aya: an Electron-based Android toolbox


With deepwiki, analyzing a project has never been easier.

Introduction

Screen mirroring based on scrcpy, file management based on adb, performance monitoring, screen control, layout inspection, log viewing, and a shell.

Feature analysis

How scrcpy is used

  1. The Electron main process pushes scrcpy.jar to the Android device, then launches it with an adb shell command

  async push() {
    logger.info('push')

    const device = client.getDevice(this.deviceId)
    await device.push(
      resolveUnpack('server/scrcpy.jar'),
      '/data/local/tmp/aya/scrcpy.jar'
    )
  }
  
  
  // then start the scrcpy server on the device with app_process
  const socket = await device.shell(
    `CLASSPATH=/data/local/tmp/aya/scrcpy.jar app_process /system/bin com.genymobile.scrcpy.Server 3.1 ${args.join(
      ' '
    )}`
  )
  2. The renderer process then establishes TCP connections to the scrcpy server:

  • Video stream

  • Audio stream

  • Control stream

// reverse TCP tunnel so the device can connect back to the renderer
const port = await main.reverseTcp(
  this.deviceId,
  `localabstract:scrcpy_${lpad(options.value.scid as string, 8, '0')}`
)

// local TCP server that receives the video, audio and control sockets
const server = node.createServer(async (socket) => {
  // video socket is the first connection
  if (!this.video) {
    this.video = {}
    this.createVideo(socketToReadableStream(socket))
    socket.on('close', () => this.emit('close'))
    return
  }

  // audio socket and control socket orders are not guaranteed, need to detect
  let isAudio = false
  this.detectAudioStream(socketToReadableStream(socket)).then((value) => {
    // never resolves if it's the control socket
    if (value.audio) {
      isAudio = true
      this.createAudio(value.stream)
    }
  })
  sleep(1000).then(() => {
    if (!isAudio) {
      this.createControl(socketToWritableStream(socket))
    }
  })
})

How the TCP connection is established

  • adb is used to establish a reverse port mapping from the device to the host

    • Reverse TCP avoids restrictions on the device side, because it is the device that initiates the connection to the service aya opens

    • This relies on adb reverse, which maps a socket (or port) on the device to a port on the computer: connections made by the device are forwarded to a server listening on the host. In short, it lets the device reach a service running on the computer (a sketch of such a reverseTcp helper follows this list)

  • The renderer process creates a TCP server listening on a port, then starts the scrcpy server on the device

  • The video stream is created first; the audio and control streams follow and are told apart by inspecting the first bytes of each connection
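
main.reverseTcp is not shown in the snippet above. Here is a minimal sketch of what such a helper could do in the main process, assuming the adb binary is on PATH; this is an illustration, not aya's actual implementation:

import net from 'node:net'
import { execFile } from 'node:child_process'
import { promisify } from 'node:util'

const execFileAsync = promisify(execFile)

// Sketch only: pick a free local port, then map the device-side abstract
// socket to it with `adb reverse`, so the device can reach the local server.
async function reverseTcp(deviceId: string, remote: string): Promise<number> {
  const port: number = await new Promise((resolve, reject) => {
    const server = net.createServer()
    server.once('error', reject)
    server.listen(0, () => {
      const { port } = server.address() as net.AddressInfo
      server.close(() => resolve(port))
    })
  })
  // adb reverse <device-side socket> <host-side socket>
  await execFileAsync('adb', ['-s', deviceId, 'reverse', remote, `tcp:${port}`])
  return port
}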

Video handling

The renderer receives the H.264 video, decodes it with WebCodecs, and renders the frames into an HTML video element.

 private async createVideo(videoStream: ReadableStream<Uint8Array>) {
    logger.info('video stream connected')

    const { options } = this

    const { stream, metadata } = await options.parseVideoStreamMetadata(
      videoStream
    )

    logger.info('video metadata', metadata)

    let codec: any = 0
    switch (metadata.codec) {
      case ScrcpyVideoCodecId.H264:
        codec = ScrcpyVideoCodecId.H264
        break
    }
    const renderer = new InsertableStreamVideoFrameRenderer()
    const decoder = new WebCodecsVideoDecoder({
      codec,
      renderer,
    })

    this.video = {
      stream: stream
        .pipeThrough(options.createMediaStreamTransformer())
        .pipeThrough(
          new InspectStream((packet) => {
            if (packet.type === 'configuration') {
              logger.info('video configuration', packet.data)
            }
          })
        ),
      metadata,
      decoder,
    }

    this.bindVideoEvent(renderer.element)

    logger.info('video ready')
    this.readiness.signal('video')
  }

Audio handling

Detect the audio codec, decode the stream, and play it back through a PCM player (a playback sketch follows the code below).

private async detectAudioStream(stream: ReadableStream<Uint8Array>) {
    let isAudio = false

    const buffered = new BufferedReadableStream(stream)
    const buffer = await buffered.readExactly(4)
    const codecMetadataValue = getUint32BigEndian(buffer, 0)

    const readableStream = new PushReadableStream<Uint8Array>(
      async (controller) => {
        await controller.enqueue(buffer)

        const stream = buffered.release()
        const reader = stream.getReader()
        while (true) {
          const { done, value } = await reader.read()
          if (done) {
            break
          }
          await controller.enqueue(value)
        }
      }
    )
    switch (codecMetadataValue) {
      case 0x00_00_00_00:
      case ScrcpyAudioCodec.Opus.metadataValue:
        isAudio = true
        break
    }

    if (isAudio) {
      return {
        audio: true,
        stream: readableStream,
      }
    }

    return {
      audio: false,
      stream: readableStream,
    }
  }
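
The PCM playback itself is not shown here. Below is a minimal sketch of gap-free playback of decoded float PCM chunks with the Web Audio API; the sample rate and channel count are assumptions, not aya's actual player:

// Sketch: queue decoded Float32 PCM chunks back to back with the Web Audio API.
class PcmPlayer {
  private ctx = new AudioContext({ sampleRate: 48000 }) // assumed sample rate
  private nextTime = 0

  feed(samples: Float32Array, channels = 2) {
    const frames = samples.length / channels
    const buffer = this.ctx.createBuffer(channels, frames, this.ctx.sampleRate)
    for (let ch = 0; ch < channels; ch++) {
      const channelData = buffer.getChannelData(ch)
      for (let i = 0; i < frames; i++) {
        channelData[i] = samples[i * channels + ch] // de-interleave
      }
    }
    const source = this.ctx.createBufferSource()
    source.buffer = buffer
    source.connect(this.ctx.destination)
    // schedule each chunk right after the previous one to avoid gaps
    this.nextTime = Math.max(this.nextTime, this.ctx.currentTime)
    source.start(this.nextTime)
    this.nextTime += frames / this.ctx.sampleRate
  }
}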

User interaction

Pointer events are handled by injectTouch, keyboard events by injectKeyCode, and wheel events by injectScroll.

private injectTouch(el: HTMLVideoElement, e: PointerEvent) {
    e.preventDefault()
    e.stopPropagation()

    const target = e.currentTarget as HTMLElement
    target.setPointerCapture(e.pointerId)

    const { type, clientX, clientY, button, buttons } = e

    let action: AndroidMotionEventAction
    switch (type) {
      case 'pointerdown':
        action = AndroidMotionEventAction.Down
        break
      case 'pointermove':
        if (buttons === 0) {
          action = AndroidMotionEventAction.HoverMove
        } else {
          action = AndroidMotionEventAction.Move
        }
        break
      case 'pointerup':
        action = AndroidMotionEventAction.Up
        break
      default:
        throw new Error(`Unsupported event type: ${type}`)
    }

    if (this.control) {
      const controller: ScrcpyControlMessageWriter = this.control.controller
      controller.injectTouch({
        action,
        pointerId: BigInt(e.pointerId),
        ...this.getPointer(el, clientX, clientY),
        pressure: buttons === 0 ? 0 : 1,
        actionButton: PointerEventButtonToAndroidButton[button],
        buttons,
      })
    }
  }
  
  
   private injectKeyCode(e: KeyboardEvent) {
    e.preventDefault()
    e.stopPropagation()

    const { type, code } = e

    let action: AndroidKeyEventAction
    switch (type) {
      case 'keydown':
        action = AndroidKeyEventAction.Down
        break
      case 'keyup':
        action = AndroidKeyEventAction.Up
        break
      default:
        throw new Error(`Unsupported event type: ${type}`)
    }

    const keyCode = AndroidKeyCode[code as keyof typeof AndroidKeyCode]

    if (this.control) {
      const controller: ScrcpyControlMessageWriter = this.control.controller
      controller.injectKeyCode({
        action,
        keyCode,
        repeat: 0,
        metaState: 0,
      })
    }
  }
  
  
  
  private injectScroll(el: HTMLVideoElement, e: WheelEvent) {
    e.preventDefault()
    e.stopPropagation()

    if (this.control) {
      const controller: ScrcpyControlMessageWriter = this.control.controller
      controller.injectScroll({
        ...this.getPointer(el, e.clientX, e.clientY),
        scrollX: -e.deltaX / 100,
        scrollY: -e.deltaY / 100,
        buttons: 0,
      })
    }
  }

Binding user input

Events are bound directly to the video element:

 private bindVideoEvent(el: HTMLVideoElement) {
    logger.info('bind video event')

    el.addEventListener('pointerdown', (e) => {
      el.focus()
      this.injectTouch(el, e)
    })
    el.addEventListener('pointermove', (e) => this.injectTouch(el, e))
    el.addEventListener('pointerup', (e) => this.injectTouch(el, e))

    el.addEventListener('wheel', (e) => this.injectScroll(el, e))

    el.setAttribute('tabindex', '0')
    el.addEventListener('keydown', (e) => this.injectKeyCode(e))
    el.addEventListener('keyup', (e) => this.injectKeyCode(e))
  }

Performance monitoring

Monitored metrics: CPU, memory, battery, and FPS.

CPU

Based on the top command, plus reading /proc/stat (a sketch of the /proc/stat part follows the code below).

export const getProcesses = singleton(async (deviceId: string) => {
  let columns = ['pid', '%cpu', 'time+', 'res', 'user', 'name', 'args']
  let command = 'top -b -n 1'
  each(columns, (column) => {
    command += ` -o ${column}`
  })

  const result: string = await shell(deviceId, command)
  let lines = result.split('\n')
  let start = -1
  for (let i = 0, len = lines.length; i < len; i++) {
    if (startWith(trim(lines[i]), 'PID')) {
      start = i + 1
      break
    }
  }

  // older version of top command
  if (start < 0) {
    const result: string = await shell(deviceId, 'top -n 1')
    lines = result.split('\n')
    for (let i = 0, len = lines.length; i < len; i++) {
      const line = trim(lines[i])
      if (startWith(line, 'PID')) {
        columns = line.split(/\s+/)
        columns = map(columns, (column) => {
          column = lowerCase(column)
          if (column === 'cpu%') {
            column = '%cpu'
          } else if (column === 'uid') {
            column = 'user'
          } else if (column === 'rss') {
            column = 'res'
          }
          return column
        })
        start = i + 1
        break
      }
    }
  }

  lines = lines.slice(start)
  const processes: any[] = []
  each(lines, (line) => {
    line = trim(line)
    if (!line) {
      return
    }
    const parts = line.split(/\s+/)
    const process: any = {}
    each(columns, (column, index) => {
      if (column === 'args') {
        process[column] = parts.slice(index).join(' ')
      } else {
        process[column] = parts[index] || ''
      }
    })
    if (process.args === command) {
      return
    }
    processes.push(process)
  })

  return processes
})
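
The code above parses the per-process listing from top. Overall CPU load, which the text above attributes to /proc/stat, can be derived from the delta between two samples; this is a sketch using the same shell helper, not aya's actual code:

// Sketch: estimate total CPU usage (percent) from two reads of /proc/stat.
async function getCpuUsage(deviceId: string): Promise<number> {
  const read = async () => {
    const stat: string = await shell(deviceId, 'cat /proc/stat')
    // first line: "cpu  user nice system idle iowait irq softirq ..."
    const fields = stat.split('\n')[0].trim().split(/\s+/).slice(1).map(Number)
    const idle = fields[3] + (fields[4] || 0) // idle + iowait
    const total = fields.reduce((sum, n) => sum + n, 0)
    return { idle, total }
  }

  const a = await read()
  await new Promise((resolve) => setTimeout(resolve, 500))
  const b = await read()

  const totalDelta = b.total - a.total
  const idleDelta = b.idle - a.idle
  return totalDelta > 0 ? (1 - idleDelta / totalDelta) * 100 : 0
}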

Memory

Reads the system /proc/meminfo file.

Per-app memory monitoring is not implemented yet; it is a TODO (one possible approach is sketched after the code below).

async function getMemory(deviceId: string) {
  const memInfo = await shell(deviceId, 'cat /proc/meminfo')
  let memTotal = 0
  let memFree = 0

  const totalMatch = getPropValue('MemTotal', memInfo)
  let freeMatch = getPropValue('MemAvailable', memInfo)
  if (!freeMatch) {
    freeMatch = getPropValue('MemFree', memInfo)
  }
  if (totalMatch && freeMatch) {
    memTotal = parseInt(totalMatch, 10) * 1024
    memFree = parseInt(freeMatch, 10) * 1024
  }

  return {
    memTotal,
    memUsed: memTotal - memFree,
  }
}
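
The per-app TODO could be filled in by parsing dumpsys meminfo for the target package; this is only a sketch of the idea, not aya's code:

// Sketch: per-app memory via dumpsys meminfo (values are reported in KB).
async function getAppMemory(deviceId: string, pkg: string): Promise<number> {
  const result: string = await shell(deviceId, `dumpsys meminfo ${pkg}`)
  // look for a line such as "TOTAL PSS:   123456" (or "TOTAL   123456" on older devices)
  const match = result.match(/TOTAL(?:\s+PSS)?:?\s+(\d+)/)
  return match ? parseInt(match[1], 10) * 1024 : 0
}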

Battery

Based on dumpsys battery; reports battery level, voltage, and temperature.

async function getBattery(deviceId: string) {
  const result = await shell(deviceId, 'dumpsys battery')

  return {
    batteryLevel: toNum(getPropValue('level', result)),
    batteryTemperature: toNum(getPropValue('temperature', result)),
    batteryVoltage: toNum(getPropValue('voltage', result)),
  }
}
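
getPropValue is used throughout these helpers but never shown; it presumably just extracts the value of a "key: value" line from command output. A sketch under that assumption:

// Sketch of a getPropValue-style helper (assumption, not aya's exact code):
// dumpsys and /proc files print "key: value" lines, so grab the value for a key.
function getPropValue(key: string, output: string): string {
  const match = output.match(new RegExp(`${key}:\\s*(.*)`, 'i'))
  return match ? match[1].trim() : ''
}

For example, getPropValue('level', result) returns the text after "level:" in the dumpsys battery output.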

FPS

  1. Read frame statistics from the SurfaceFlinger service

  2. If that is not available, fall back to analyzing dumpsys SurfaceFlinger --latency output

const getFps: IpcGetFps = async function (deviceId, pkg) {
  let fps = 0

  const result: string = await shell(deviceId, 'dumpsys SurfaceFlinger')
  const match = result.match(/flips=(\d+)/)
  if (match) {
    const flips = toNum(match[1])
    const time = now()
    const lastFlips = getDeviceStore(deviceId, 'flips')
    if (lastFlips) {
      fps = Math.round(
        ((flips - lastFlips.flips) * 1000) / (time - lastFlips.time)
      )
    }
    setDeviceStore(deviceId, 'flips', {
      time,
      flips,
    })

    return fps
  } else {
    fps = await getFpsByLatency(deviceId, pkg)
  }

  return fps
}

async function getFpsByLatency(deviceId: string, pkg: string) {
  const fps = 0

  if (!pkg) {
    return fps
  }

  const list = await shell(deviceId, 'dumpsys SurfaceFlinger --list')
  const layers = filter(list.split('\n'), (line) => contain(line, pkg))
  if (isEmpty(layers)) {
    return fps
  }

  const latencies = await shell(
    deviceId,
    map(layers, (layer) => {
      return `dumpsys SurfaceFlinger --latency "${layer}"`
    })
  )
  const allFps = map(latencies, (latency) => {
    latency = trim(latency)
    let fps = 0
    const timestamps: number[] = []
    each(latency.split('\n'), (line) => {
      const match = line.match(/(\d+)\s+(\d+)\s+(\d+)/)
      if (match) {
        timestamps.push(toNum(match[2]) / 1e9)
      }
    })
    timestamps.pop()

    if (timestamps.length > 1) {
      const seconds = last(timestamps) - timestamps[0]
      if (seconds > 0) {
        fps = Math.round((timestamps.length - 1) / seconds)
      }
    }
    return fps
  })

  return max(...allFps)
}

Terminal implementation

  • Based on xterm.js

  • A hand-rolled encoder/decoder for the adb shell v2 protocol (a usage sketch follows the class below)

class ShellProtocol {
  static STDIN = 0
  static STDOUT = 1
  static STDERR = 2
  static EXIT = 3
  static CLOSE_STDIN = 4
  static WINDOW_SIZE_CHANGE = 5

  static encodeData(id, data) {
    data = Buffer.from(data, 'utf8')
    const buf = Buffer.alloc(5 + data.length)
    buf.writeUInt8(id, 0)
    buf.writeUInt32LE(data.length, 1)
    data.copy(buf, 5)

    return buf
  }

  static decodeData(buf) {
    const result: Array<{
      id: number
      data: Buffer
    }> = []

    for (let i = 0, len = buf.length; i < len; ) {
      const id = buf.readUInt8(i)
      const len = buf.readUInt32LE(i + 1)
      const data = buf.slice(i + 5, i + 5 + len)
      result.push({
        id,
        data,
      })
      i += 5 + len
    }

    return result
  }
}
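
For the opposite direction (renderer to device), the same framing can be used to send stdin data and terminal resizes. A usage sketch built on the class above; the socket shape and the resize payload format are assumptions, not aya's exact code:

// Sketch: writing to a shell,v2 connection with ShellProtocol.encodeData.
function writeInput(socket: { write(buf: Buffer): void }, text: string) {
  socket.write(ShellProtocol.encodeData(ShellProtocol.STDIN, text))
}

function resizeWindow(
  socket: { write(buf: Buffer): void },
  rows: number,
  cols: number
) {
  // adb's shell v2 window-size payload is an ASCII string "rowsxcols,xpixelsxypixels"
  socket.write(
    ShellProtocol.encodeData(
      ShellProtocol.WINDOW_SIZE_CHANGE,
      `${rows}x${cols},0x0`
    )
  )
}
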
  • Compatible with different Android versions (falls back from shell v2 to the plain shell service)

class AdbPty extends Emitter {
  private connection: any
  private useV2 = true
  constructor(connection: any) {
    super()

    this.connection = connection
  }
  async init(useV2 = true) {
    const { connection } = this

    this.useV2 = useV2
    const protocol = useV2 ? 'shell,v2:' : 'shell:'
    connection.write(Protocol.encodeData(Buffer.from(protocol)))
    const result = await connection.parser.readAscii(4)
    if (result !== Protocol.OKAY) {
      throw new Error('Failed to create shell')
    }

    if (useV2) {
      const { socket } = connection
      socket.on('readable', () => {
        const buf = socket.read()
        if (buf) {
          const packets = ShellProtocol.decodeData(buf)
          for (let i = 0, len = packets.length; i < len; i++) {
            const { id, data } = packets[i]
            if (id === ShellProtocol.STDOUT) {
              this.emit('data', data.toString('utf8'))
            }
          }
        }
      })
    } else {
      const { socket } = connection
      socket.on('readable', () => {
        const buf = socket.read()
        if (buf) {
          this.emit('data', buf.toString('utf8'))
        }
      })
    }
  }
  • Provides a right-click context menu with several actions

const onContextMenu = (e: React.MouseEvent) => {
    if (!device) {
      return
    }

    const term = termRef.current!
    const template: any[] = [
      {
        label: t('shortcut'),
        click() {
          setCommandPaletteVisible(true)
        },
      },
      {
        type: 'separator',
      },
      {
        label: t('copy'),
        click() {
          if (term.hasSelection()) {
            copy(term.getSelection())
            term.focus()
          }
        },
      },
      {
        label: t('paste'),
        click: async () => {
          const text = await navigator.clipboard.readText()
          if (text) {
            main.writeShell(sessionIdRef.current, text)
          }
        },
      },
      {
        label: t('selectAll'),
        click() {
          term.selectAll()
        },
      },
      {
        type: 'separator',
      },
      {
        label: t('reset'),
        click() {
          if (sessionIdRef.current) {
            main.killShell(sessionIdRef.current)
          }
          term.reset()
          if (device) {
            main.createShell(device.id).then((id) => {
              sessionIdRef.current = id
            })
            term.focus()
          }
        },
      },
      {
        label: t('clear'),
        click() {
          term.clear()
          term.focus()
        },
      },
    ]

    contextMenu(e, template)
  }

Screenshot implementation

Calls screencap on the device object.

Under the hood this sends the screencap command to the Android device:

async function screencap(deviceId: string) {
  const device = await client.getDevice(deviceId)
  const data = await device.screencap()
  const buf = await Adb.util.readAll(data)

  return buf.toString('base64')
}

export default class ScreencapCommand extends Command<Duplex> {
  execute(): Bluebird<Duplex> {
    this._send('shell:echo && screencap -p 2>/dev/null');
    return this.parser.readAscii(4).then((reply) => {
      switch (reply) {
        case Protocol.OKAY:
          let transform = new LineTransform();
          return this.parser
            .readBytes(1)
            .then((chunk) => {
              transform = new LineTransform({ autoDetect: true });
              transform.write(chunk);
              return this.parser.raw().pipe(transform);
            })
            .catch(Parser.PrematureEOFError, () => {
              throw Error('No support for the screencap command');
            });
        case Protocol.FAIL:
          return this.parser.readError();
        default:
          return this.parser.unexpected(reply, 'OKAY or FAIL');
      }
    });
  }
}

Logcat implementation

Wraps a logcat entry reader: each entry is tagged with the package name of its pid, resolved through a pid-to-name map (getPidNames, sketched after the code), and the stream can be paused and resumed.

class Logcat extends Emitter {
  private reader: any
  private paused = false
  constructor(reader: any) {
    super()

    this.reader = reader
  }
  async init(deviceId: string) {
    const { reader } = this

    reader.on('entry', async (entry) => {
      if (this.paused) {
        return
      }
      if (entry.pid != 0) {
        let pidNames = getDeviceStore(deviceId, 'pidNames')
        if (!pidNames || !pidNames[entry.pid]) {
          pidNames = await getPidNames(deviceId)
          setDeviceStore(deviceId, 'pidNames', pidNames)
        }
        entry.package = pidNames[entry.pid] || `pid-${entry.pid}`
      }
      this.emit('entry', entry)
    })
  }
  close() {
    this.reader.end()
  }
  pause() {
    this.paused = true
  }
  resume() {
    this.paused = false
  }
}
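
getPidNames is not shown; a pid-to-package map can be built with the ps command. A sketch using the same shell helper as the other snippets, not aya's exact implementation:

// Sketch: build a pid -> process name map from ps output.
async function getPidNames(deviceId: string): Promise<Record<number, string>> {
  const result: string = await shell(deviceId, 'ps -A -o PID,NAME')
  const pidNames: Record<number, string> = {}
  for (const line of result.split('\n').slice(1)) {
    const [pid, name] = line.trim().split(/\s+/)
    if (pid && name) {
      pidNames[Number(pid)] = name
    }
  }
  return pidNames
}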

WebView

Detects and inspects WebViews running on the device.

  • Reads /proc/net/unix to find WebView debugging sockets

  • Matches the socket belonging to the selected process

  • Connects to the device-side socket via forwardTcp (sketched after the getWebviews code below)

  • Sends an HTTP request to list the WebView instances

  • If the app has not enabled WebView debugging, nothing is found

WebView detection is based on Android's WebView debugging protocol. When an app enables WebView debugging, the system creates a Unix domain socket for the debugging connection. AYA detects these WebViews as follows:

  1. Scan the socket list in /proc/net/unix

  2. Match the WebView debugging socket of the target process

  3. Set up a port forward to it

  4. Query the WebView instances over HTTP

This approach requires no root access, but the target app must have WebView debugging enabled.

const getWebviews = singleton(async (deviceId: string, pid: number) => {
  const webviews: any[] = []

  const result: string = await shell(deviceId, `cat /proc/net/unix`)

  const lines = result.split('\n')
  let line = ''
  for (let i = 0, len = lines.length; i < len; i++) {
    line = trim(lines[i])
    if (contain(line, `webview_devtools_remote_${pid}`)) {
      break
    }
  }

  if (!line) {
    return webviews
  }

  const socketNameMatch = line.match(/[^@]+@(.*?webview_devtools_remote_?.*)/)
  if (!socketNameMatch) {
    return webviews
  }

  const socketName = socketNameMatch[1]
  const remote = `localabstract:${socketName}`
  const port = await forwardTcp(deviceId, remote)
  const { data } = await axios.get(`http://127.0.0.1:${port}/json`)
  each(data, (item: any) => webviews.push(item))

  return webviews
})
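
forwardTcp is the forward-direction counterpart of reverseTcp and is not shown either. A sketch that lets adb pick a free host port; again an illustration, not the project's exact code:

import { execFile } from 'node:child_process'
import { promisify } from 'node:util'

const execFileAsync = promisify(execFile)

// Sketch: `adb forward tcp:0 <remote>` asks adb to choose a free local port
// and prints the chosen port number on stdout.
async function forwardTcp(deviceId: string, remote: string): Promise<number> {
  const { stdout } = await execFileAsync('adb', [
    '-s',
    deviceId,
    'forward',
    'tcp:0',
    remote,
  ])
  return parseInt(stdout.trim(), 10)
}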

Remote debugging: after selecting a WebView, a WebSocket connection can be established to debug it remotely.

  <ToolbarIcon
          disabled={selected === null}
          icon="debug"
          title={t('inspect')}
          onClick={() => {
            let url = selected.devtoolsFrontendUrl
            if (store.webview.useLocalInspector) {
              url = 'devtools://devtools/bundled/inspector.html'
              url += `?ws=${selected.webSocketDebuggerUrl.replace('ws://', '')}`
            }
            main.openWindow(url)
          }}
        />

Two modes are supported:

  1. Use the devtoolsFrontendUrl provided by the WebView to connect directly to the remote DevTools frontend

  2. Use the local Chrome DevTools (devtools://devtools/bundled/inspector.html) pointed at the WebSocket debugging endpoint

Process management

  • Process information and the related metrics come from the top command: top -b -n 1

  • Refreshed every 5 seconds (see the sketch below)
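
A sketch of the 5-second refresh loop on the renderer side; main.getProcesses is an assumed IPC bridge name, mirroring the main.* calls seen earlier, not necessarily aya's real API:

// Sketch: poll process info every 5 seconds and hand it to the UI.
function startProcessPolling(
  deviceId: string,
  onUpdate: (processes: any[]) => void
) {
  const timer = setInterval(async () => {
    const processes = await main.getProcesses(deviceId) // assumed IPC method
    onUpdate(processes)
  }, 5000)

  // call the returned function to stop polling
  return () => clearInterval(timer)
}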

Layout analysis

  1. Generate a temporary file path: aya_uidump.xml under the device's /data/local/tmp/ directory

  2. Run the uiautomator dump command over adb shell to export the current UI hierarchy to that XML file

  3. Read the file: pull the XML content from the device with file.pullFileData

  4. Return a string: convert the binary data to UTF-8 and return it

const dumpWindowHierarchy: IpcDumpWindowHierarchy = async function (deviceId) {
  const path = '/data/local/tmp/aya_uidump.xml'
  await shell(deviceId, `uiautomator dump ${path}`)
  const data = await file.pullFileData(deviceId, path)
  return data.toString('utf8')
}

Combining the XML with the UI screenshot

  • Parse the XML to get each element's bounds

  • Map the bounds onto the screenshot by proportional scaling (see the sketch after the sample XML)

<hierarchy>  
  <node bounds="[0,0][1080,2340]" class="android.widget.FrameLayout" resource-id="android:id/content">  
    <node bounds="[0,72][1080,2340]" class="android.view.ViewGroup" text="Hello World">  
      <node bounds="[100,200][500,300]" class="android.widget.TextView" text="Button"/>  
    </node>  
  </node>  
</hierarchy>
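
A sketch of the proportional mapping (illustration only): parse a node's bounds attribute and scale it from the device resolution to the size at which the screenshot is rendered.

// Sketch: map bounds="[x1,y1][x2,y2]" onto a screenshot shown at a different size.
function boundsToRect(
  bounds: string,
  deviceWidth: number,
  deviceHeight: number,
  imageWidth: number,
  imageHeight: number
) {
  const match = bounds.match(/\[(\d+),(\d+)\]\[(\d+),(\d+)\]/)
  if (!match) {
    return null
  }
  const [x1, y1, x2, y2] = match.slice(1).map(Number)
  const scaleX = imageWidth / deviceWidth
  const scaleY = imageHeight / deviceHeight
  return {
    left: x1 * scaleX,
    top: y1 * scaleY,
    width: (x2 - x1) * scaleX,
    height: (y2 - y1) * scaleY,
  }
}

For example, boundsToRect('[100,200][500,300]', 1080, 2340, 360, 780) gives the overlay rectangle for a screenshot rendered at one third of the device resolution.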

adb

  • The GitHub build downloads adb automatically

  • Alternatively, you can download the adb binary yourself